00:00:00.000 Started by upstream project "autotest-per-patch" build number 126177
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 23934
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.100 The recommended git tool is: git
00:00:00.100 using credential 00000000-0000-0000-0000-000000000002
00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.136 Fetching changes from the remote Git repository
00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.176 Using shallow fetch with depth 1
00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.176 > git --version # timeout=10
00:00:00.210 > git --version # 'git version 2.39.2'
00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.236 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/10 # timeout=5
00:00:04.408 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.418 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.429 Checking out Revision d49304e16352441ae7eebb2419125dd094201f3e (FETCH_HEAD)
00:00:04.429 > git config core.sparsecheckout # timeout=10
00:00:04.439 > git read-tree -mu HEAD # timeout=10
00:00:04.456 > git checkout -f d49304e16352441ae7eebb2419125dd094201f3e # timeout=5
00:00:04.479 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing"
00:00:04.479 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:04.589 [Pipeline] Start of Pipeline
00:00:04.600 [Pipeline] library
00:00:04.602 Loading library shm_lib@master
00:00:04.602 Library shm_lib@master is cached. Copying from home.
00:00:04.620 [Pipeline] node
00:00:04.627 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.631 [Pipeline] {
00:00:04.642 [Pipeline] catchError
00:00:04.644 [Pipeline] {
00:00:04.659 [Pipeline] wrap
00:00:04.670 [Pipeline] {
00:00:04.677 [Pipeline] stage
00:00:04.679 [Pipeline] { (Prologue)
00:00:04.855 [Pipeline] sh
00:00:05.136 + logger -p user.info -t JENKINS-CI
00:00:05.158 [Pipeline] echo
00:00:05.160 Node: WFP8
00:00:05.166 [Pipeline] sh
00:00:05.463 [Pipeline] setCustomBuildProperty
00:00:05.473 [Pipeline] echo
00:00:05.474 Cleanup processes
00:00:05.477 [Pipeline] sh
00:00:05.755 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.755 1421782 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.767 [Pipeline] sh
00:00:06.048 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.048 ++ grep -v 'sudo pgrep'
00:00:06.048 ++ awk '{print $1}'
00:00:06.048 + sudo kill -9
00:00:06.048 + true
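The cleanup step above chains pgrep, grep and awk to collect PIDs of anything still running out of the workspace, then force-kills them; here the list came back empty, so kill -9 ran with no arguments and the trailing true absorbed the failure. A minimal sketch of the pattern (WORKSPACE stands in for the job path):

  # Collect PIDs of leftover processes from the workspace; drop the pgrep
  # invocation itself from the list before killing.
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 with an empty PID list exits non-zero; '|| true' keeps a
  # 'set -e' script (and the pipeline step) alive when nothing matched.
  sudo kill -9 $pids || true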
00:00:06.075 [WS-CLEANUP] done 00:00:06.079 [Pipeline] setCustomBuildProperty 00:00:06.091 [Pipeline] sh 00:00:06.367 + sudo git config --global --replace-all safe.directory '*' 00:00:06.433 [Pipeline] httpRequest 00:00:06.468 [Pipeline] echo 00:00:06.469 Sorcerer 10.211.164.101 is alive 00:00:06.475 [Pipeline] httpRequest 00:00:06.480 HttpMethod: GET 00:00:06.480 URL: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:06.481 Sending request to url: http://10.211.164.101/packages/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:06.505 Response Code: HTTP/1.1 200 OK 00:00:06.506 Success: Status code 200 is in the accepted range: 200,404 00:00:06.506 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:20.936 [Pipeline] sh 00:00:21.220 + tar --no-same-owner -xf jbp_d49304e16352441ae7eebb2419125dd094201f3e.tar.gz 00:00:21.242 [Pipeline] httpRequest 00:00:21.270 [Pipeline] echo 00:00:21.272 Sorcerer 10.211.164.101 is alive 00:00:21.285 [Pipeline] httpRequest 00:00:21.290 HttpMethod: GET 00:00:21.290 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:21.291 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:21.307 Response Code: HTTP/1.1 200 OK 00:00:21.307 Success: Status code 200 is in the accepted range: 200,404 00:00:21.307 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:54.154 [Pipeline] sh 00:00:54.435 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:56.980 [Pipeline] sh 00:00:57.263 + git -C spdk log --oneline -n5 00:00:57.263 2728651ee accel: adjust task per ch define name 00:00:57.263 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:57.263 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:57.263 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:57.263 719d03c6a sock/uring: only register net impl if supported 00:00:57.275 [Pipeline] } 00:00:57.294 [Pipeline] // stage 00:00:57.303 [Pipeline] stage 00:00:57.305 [Pipeline] { (Prepare) 00:00:57.327 [Pipeline] writeFile 00:00:57.346 [Pipeline] sh 00:00:57.631 + logger -p user.info -t JENKINS-CI 00:00:57.644 [Pipeline] sh 00:00:57.929 + logger -p user.info -t JENKINS-CI 00:00:57.942 [Pipeline] sh 00:00:58.225 + cat autorun-spdk.conf 00:00:58.225 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.225 SPDK_TEST_NVMF=1 00:00:58.225 SPDK_TEST_NVME_CLI=1 00:00:58.225 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.225 SPDK_TEST_NVMF_NICS=e810 00:00:58.225 SPDK_TEST_VFIOUSER=1 00:00:58.225 SPDK_RUN_UBSAN=1 00:00:58.225 NET_TYPE=phy 00:00:58.232 RUN_NIGHTLY=0 00:00:58.238 [Pipeline] readFile 00:00:58.270 [Pipeline] withEnv 00:00:58.272 [Pipeline] { 00:00:58.286 [Pipeline] sh 00:00:58.570 + set -ex 00:00:58.570 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:58.570 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:58.570 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.570 ++ SPDK_TEST_NVMF=1 00:00:58.570 ++ SPDK_TEST_NVME_CLI=1 00:00:58.571 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.571 ++ SPDK_TEST_NVMF_NICS=e810 00:00:58.571 ++ SPDK_TEST_VFIOUSER=1 00:00:58.571 ++ SPDK_RUN_UBSAN=1 00:00:58.571 ++ NET_TYPE=phy 00:00:58.571 ++ RUN_NIGHTLY=0 00:00:58.571 + case $SPDK_TEST_NVMF_NICS in 00:00:58.571 + DRIVERS=ice 00:00:58.571 + [[ tcp == \r\d\m\a 
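Because SPDK_TEST_NVMF_NICS=e810 and the transport is tcp (not rdma), the job strips any RDMA-capable modules and loads only the ice driver. The rmmod errors are expected when the modules were never loaded. A sketch of the idiom:

  # Ensure only the NIC driver under test is loaded; rmmod failing because a
  # module is absent is the normal case, hence the '|| true'.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in ice; do
      sudo modprobe "$D"   # e810 NICs under TCP testing only need 'ice'
  done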
00:00:58.580 [Pipeline] }
00:00:58.598 [Pipeline] // withEnv
00:00:58.603 [Pipeline] }
00:00:58.621 [Pipeline] // stage
00:00:58.633 [Pipeline] catchError
00:00:58.634 [Pipeline] {
00:00:58.649 [Pipeline] timeout
00:00:58.650 Timeout set to expire in 50 min
00:00:58.652 [Pipeline] {
00:00:58.667 [Pipeline] stage
00:00:58.669 [Pipeline] { (Tests)
00:00:58.688 [Pipeline] sh
00:00:58.980 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.980 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.980 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.980 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:58.980 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:58.980 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:58.980 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.980 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.980 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:58.980 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.980 + source /etc/os-release
00:00:58.980 ++ NAME='Fedora Linux'
00:00:58.980 ++ VERSION='38 (Cloud Edition)'
00:00:58.980 ++ ID=fedora
00:00:58.980 ++ VERSION_ID=38
00:00:58.980 ++ VERSION_CODENAME=
00:00:58.980 ++ PLATFORM_ID=platform:f38
00:00:58.980 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:58.980 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:58.980 ++ LOGO=fedora-logo-icon
00:00:58.980 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:58.980 ++ HOME_URL=https://fedoraproject.org/
00:00:58.980 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:58.980 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:58.980 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:58.980 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:58.980 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:58.980 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:58.980 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:58.980 ++ SUPPORT_END=2024-05-14
00:00:58.980 ++ VARIANT='Cloud Edition'
00:00:58.980 ++ VARIANT_ID=cloud
00:00:58.980 + uname -a
00:00:58.980 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:58.980 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:01.584 Hugepages
00:01:01.584 node hugesize free / total
00:01:01.584 node0 1048576kB 0 / 0
00:01:01.584 node0 2048kB 0 / 0
00:01:01.585 node1 1048576kB 0 / 0
00:01:01.585 node1 2048kB 0 / 0
00:01:01.585
00:01:01.585 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:01.585 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:01.585 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:01.585 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:01.585 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:01.585 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
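The 'setup.sh status' table above is built from sysfs: per-NUMA-node hugepage counters plus the PCI devices SPDK cares about. A rough equivalent of the hugepage half, assuming the two-socket node0/node1 layout shown:

  # Print free/total hugepages per node and page size, straight from sysfs.
  for node in /sys/devices/system/node/node[01]; do
      for h in "$node"/hugepages/hugepages-*; do
          echo "${node##*/} ${h##*hugepages-}: free=$(cat "$h/free_hugepages") total=$(cat "$h/nr_hugepages")"
      done
  done

Both counts are 0/0 here because no hugepages have been reserved yet at this point in the run.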
00:01:01.585 + rm -f /tmp/spdk-ld-path
00:01:01.585 + source autorun-spdk.conf
00:01:01.585 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.585 ++ SPDK_TEST_NVMF=1
00:01:01.585 ++ SPDK_TEST_NVME_CLI=1
00:01:01.585 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.585 ++ SPDK_TEST_NVMF_NICS=e810
00:01:01.585 ++ SPDK_TEST_VFIOUSER=1
00:01:01.585 ++ SPDK_RUN_UBSAN=1
00:01:01.585 ++ NET_TYPE=phy
00:01:01.585 ++ RUN_NIGHTLY=0
00:01:01.585 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:01.585 + [[ -n '' ]]
00:01:01.585 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.585 + for M in /var/spdk/build-*-manifest.txt
00:01:01.585 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:01.585 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.585 + for M in /var/spdk/build-*-manifest.txt
00:01:01.585 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:01.585 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.585 ++ uname
00:01:01.585 + [[ Linux == \L\i\n\u\x ]]
00:01:01.585 + sudo dmesg -T
00:01:01.585 + sudo dmesg --clear
00:01:01.585 + dmesg_pid=1422707
00:01:01.585 + [[ Fedora Linux == FreeBSD ]]
00:01:01.585 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.585 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.585 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.585 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.585 + sudo dmesg -Tw
00:01:01.585 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.585 + FIO_BIN=/usr/src/fio-static/fio
00:01:01.585 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.585 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.585 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.585 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.585 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.585 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.585 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.585 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.585 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.585 Test configuration:
00:01:01.585 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.585 SPDK_TEST_NVMF=1
00:01:01.585 SPDK_TEST_NVME_CLI=1
00:01:01.585 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.585 SPDK_TEST_NVMF_NICS=e810
00:01:01.585 SPDK_TEST_VFIOUSER=1
00:01:01.585 SPDK_RUN_UBSAN=1
00:01:01.585 NET_TYPE=phy
00:01:01.844 RUN_NIGHTLY=0
12:35:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:01.844 12:35:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:01.844 12:35:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:01.844 12:35:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:01.844 12:35:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.844 12:35:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.844 12:35:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.844 12:35:32 -- paths/export.sh@5 -- $ export PATH
00:01:01.844 12:35:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.844 12:35:32 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:01.844 12:35:32 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:01.844 12:35:32 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721039732.XXXXXX
00:01:01.844 12:35:32 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721039732.oKZIiI
00:01:01.844 12:35:32 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
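The scratch workspace created above uses a date-stamped mktemp template (1721039732 is the epoch value that 'date +%s' returned). A minimal sketch of the idiom; the spdk_ prefix matches the log, the EXIT-trap cleanup is an assumption and is not shown in this log:

  # Create a unique, timestamped scratch directory under $TMPDIR.
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
  # Assumed cleanup on exit (not visible in the log above):
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT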
00:01:01.844 12:35:32 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:01.844 12:35:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:01.844 12:35:32 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:01.844 12:35:32 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:01.844 12:35:32 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:01.844 12:35:32 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:01.844 12:35:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.844 12:35:32 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:01.844 12:35:32 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:01.844 12:35:32 -- pm/common@17 -- $ local monitor
00:01:01.844 12:35:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.844 12:35:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.844 12:35:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.844 12:35:32 -- pm/common@21 -- $ date +%s
00:01:01.844 12:35:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.844 12:35:32 -- pm/common@21 -- $ date +%s
00:01:01.844 12:35:32 -- pm/common@25 -- $ sleep 1
00:01:01.844 12:35:32 -- pm/common@21 -- $ date +%s
00:01:01.844 12:35:32 -- pm/common@21 -- $ date +%s
00:01:01.844 12:35:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039732
00:01:01.844 12:35:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039732
00:01:01.844 12:35:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039732
00:01:01.844 12:35:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721039732
00:01:01.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039732_collect-vmstat.pm.log
00:01:01.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039732_collect-cpu-load.pm.log
00:01:01.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039732_collect-cpu-temp.pm.log
00:01:01.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721039732_collect-bmc-pm.bmc.pm.log
00:01:02.781 12:35:33 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
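start_monitor_resources launches the pm/ collectors in the background (their output is redirected to the .pm.log files above) and then arms an EXIT trap so they are torn down however the build ends. A sketch of that shape; the body of stop_monitor_resources here is a hypothetical reduction, not SPDK's actual implementation:

  # Launch resource monitors in the background, remember their PIDs,
  # and guarantee teardown via an EXIT trap.
  output=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
  monitor_pids=()
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
      "spdk/scripts/perf/pm/$mon" -d "$output/power" -l -p "monitor.autobuild.sh.$(date +%s)" &
      monitor_pids+=("$!")
  done
  stop_monitor_resources() { kill "${monitor_pids[@]}" 2>/dev/null || true; }
  trap stop_monitor_resources EXIT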
00:01:02.781 12:35:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:35:33 -- spdk/autobuild.sh@12 -- $ umask 022
12:35:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
12:35:33 -- spdk/autobuild.sh@16 -- $ date -u
00:01:02.781 Mon Jul 15 10:35:33 AM UTC 2024
00:01:02.781 12:35:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:02.781 v24.09-pre-206-g2728651ee
00:01:02.781 12:35:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:02.781 12:35:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:02.781 12:35:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:02.781 12:35:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:02.781 12:35:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:02.781 12:35:33 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.781 ************************************
00:01:02.781 START TEST ubsan
00:01:02.781 ************************************
00:01:02.781 12:35:33 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:02.781 using ubsan
00:01:02.781
00:01:02.781 real 0m0.000s
00:01:02.781 user 0m0.000s
00:01:02.781 sys 0m0.000s
00:01:02.781 12:35:33 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:02.781 12:35:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:02.781 ************************************
00:01:02.781 END TEST ubsan
00:01:02.781 ************************************
00:01:02.781 12:35:33 -- common/autotest_common.sh@1142 -- $ return 0
00:01:02.781 12:35:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:02.781 12:35:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:02.781 12:35:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:02.781 12:35:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:02.781 12:35:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:02.781 12:35:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:02.781 12:35:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:02.781 12:35:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:02.781 12:35:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:03.062 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:03.062 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:03.321 Using 'verbs' RDMA provider
00:01:16.471 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:28.688 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:28.688 Creating mk/config.mk...done.
00:01:28.688 Creating mk/cc.flags.mk...done.
00:01:28.688 Type 'make' to build.
00:01:28.688 12:35:58 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:28.688 12:35:58 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:28.688 12:35:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:28.688 12:35:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.688 ************************************
00:01:28.688 START TEST make
00:01:28.688 ************************************
00:01:28.688 12:35:58 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:28.688 make[1]: Nothing to be done for 'all'.
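run_test, used above for both 'run_test ubsan echo ...' and 'run_test make make -j96', banners a named sub-test, runs the wrapped command under time, and reports its status. A simplified sketch of such a wrapper (not SPDK's exact implementation, which also manages xtrace state):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                       # run the wrapped command, timed
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }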
00:01:29.629 The Meson build system
00:01:29.629 Version: 1.3.1
00:01:29.629 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:29.629 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:29.629 Build type: native build
00:01:29.629 Project name: libvfio-user
00:01:29.629 Project version: 0.0.1
00:01:29.629 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:29.629 C linker for the host machine: cc ld.bfd 2.39-16
00:01:29.629 Host machine cpu family: x86_64
00:01:29.629 Host machine cpu: x86_64
00:01:29.629 Run-time dependency threads found: YES
00:01:29.629 Library dl found: YES
00:01:29.629 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:29.629 Run-time dependency json-c found: YES 0.17
00:01:29.629 Run-time dependency cmocka found: YES 1.1.7
00:01:29.629 Program pytest-3 found: NO
00:01:29.629 Program flake8 found: NO
00:01:29.629 Program misspell-fixer found: NO
00:01:29.629 Program restructuredtext-lint found: NO
00:01:29.629 Program valgrind found: YES (/usr/bin/valgrind)
00:01:29.629 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.629 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.629 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.629 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.629 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:29.629 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:29.629 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
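The 'Compiler for C supports arguments ...' lines above are Meson's flag probes: it test-compiles a trivial translation unit with the candidate flag and records YES/NO. The same check can be reproduced by hand (a sketch; -Werror turns an unknown-flag warning into a hard failure):

  # Does the compiler accept -Wwrite-strings?
  cc -Werror -Wwrite-strings -c -x c /dev/null -o /dev/null && echo YES || echo NO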
00:01:29.629 Build targets in project: 8
00:01:29.629 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:29.629 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:29.629
00:01:29.629 libvfio-user 0.0.1
00:01:29.629
00:01:29.629 User defined options
00:01:29.629 buildtype : debug
00:01:29.629 default_library: shared
00:01:29.629 libdir : /usr/local/lib
00:01:29.629
00:01:29.629 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:30.196 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:30.196 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:30.196 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:30.196 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:30.196 [4/37] Compiling C object samples/null.p/null.c.o
00:01:30.196 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:30.196 [6/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:30.196 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:30.196 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:30.196 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:30.196 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:30.196 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:30.196 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:30.196 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:30.196 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:30.196 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:30.196 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:30.196 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:30.196 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:30.196 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:30.196 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:30.196 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:30.196 [22/37] Compiling C object samples/server.p/server.c.o
00:01:30.196 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:30.196 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:30.196 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:30.196 [26/37] Compiling C object samples/client.p/client.c.o
00:01:30.196 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:30.196 [28/37] Linking target samples/client
00:01:30.196 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:30.455 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:30.456 [31/37] Linking target test/unit_tests
00:01:30.456 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:30.456 [33/37] Linking target samples/server
00:01:30.456 [34/37] Linking target samples/null
00:01:30.456 [35/37] Linking target samples/lspci
00:01:30.456 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:30.456 [37/37] Linking target samples/gpio-pci-idio-16
00:01:30.456 INFO: autodetecting backend as ninja
00:01:30.456 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.456 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.025 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:31.025 ninja: no work to do.
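The libvfio-user subproject above follows the stock Meson flow: configure a build directory, compile with ninja, then stage the result with a DESTDIR install. A sketch of the equivalent commands, with paths shortened for readability:

  # Configure, build, and stage libvfio-user (debug, shared), mirroring the
  # options summary and the install command logged above.
  meson setup build-debug libvfio-user --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug                      # the 37 targets listed above
  DESTDIR=$PWD/build/libvfio-user meson install --quiet -C build-debug

The second 'ninja: no work to do.' is the install step re-invoking ninja, which finds everything already up to date.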
00:01:36.301 The Meson build system
00:01:36.301 Version: 1.3.1
00:01:36.301 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:36.301 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:36.301 Build type: native build
00:01:36.301 Program cat found: YES (/usr/bin/cat)
00:01:36.301 Project name: DPDK
00:01:36.301 Project version: 24.03.0
00:01:36.301 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:36.301 C linker for the host machine: cc ld.bfd 2.39-16
00:01:36.301 Host machine cpu family: x86_64
00:01:36.301 Host machine cpu: x86_64
00:01:36.301 Message: ## Building in Developer Mode ##
00:01:36.301 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:36.301 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:36.301 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:36.301 Program python3 found: YES (/usr/bin/python3)
00:01:36.301 Program cat found: YES (/usr/bin/cat)
00:01:36.301 Compiler for C supports arguments -march=native: YES
00:01:36.301 Checking for size of "void *" : 8
00:01:36.301 Checking for size of "void *" : 8 (cached)
00:01:36.301 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:36.301 Library m found: YES
00:01:36.301 Library numa found: YES
00:01:36.301 Has header "numaif.h" : YES
00:01:36.301 Library fdt found: NO
00:01:36.301 Library execinfo found: NO
00:01:36.301 Has header "execinfo.h" : YES
00:01:36.301 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:36.301 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:36.301 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:36.301 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:36.301 Run-time dependency openssl found: YES 3.0.9
00:01:36.301 Run-time dependency libpcap found: YES 1.10.4
00:01:36.301 Has header "pcap.h" with dependency libpcap: YES
00:01:36.301 Compiler for C supports arguments -Wcast-qual: YES
00:01:36.301 Compiler for C supports arguments -Wdeprecated: YES
00:01:36.301 Compiler for C supports arguments -Wformat: YES
00:01:36.301 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:36.301 Compiler for C supports arguments -Wformat-security: NO
00:01:36.301 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:36.301 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:36.301 Compiler for C supports arguments -Wnested-externs: YES
00:01:36.301 Compiler for C supports arguments -Wold-style-definition: YES
00:01:36.301 Compiler for C supports arguments -Wpointer-arith: YES
00:01:36.301 Compiler for C supports arguments -Wsign-compare: YES
00:01:36.301 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:36.301 Compiler for C supports arguments -Wundef: YES
00:01:36.301 Compiler for C supports arguments -Wwrite-strings: YES
00:01:36.301 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:36.301 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:36.301 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:36.301 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:36.301 Program objdump found: YES (/usr/bin/objdump)
00:01:36.301 Compiler for C supports arguments -mavx512f: YES
00:01:36.301 Checking if "AVX512 checking" compiles: YES
00:01:36.301 Fetching value of define "__SSE4_2__" : 1
00:01:36.301 Fetching value of define "__AES__" : 1
00:01:36.301 Fetching value of define "__AVX__" : 1
00:01:36.301 Fetching value of define "__AVX2__" : 1
00:01:36.301 Fetching value of define "__AVX512BW__" : 1
00:01:36.301 Fetching value of define "__AVX512CD__" : 1
00:01:36.301 Fetching value of define "__AVX512DQ__" : 1
00:01:36.301 Fetching value of define "__AVX512F__" : 1
00:01:36.301 Fetching value of define "__AVX512VL__" : 1
00:01:36.301 Fetching value of define "__PCLMUL__" : 1
00:01:36.301 Fetching value of define "__RDRND__" : 1
00:01:36.301 Fetching value of define "__RDSEED__" : 1
00:01:36.301 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:36.301 Fetching value of define "__znver1__" : (undefined)
00:01:36.301 Fetching value of define "__znver2__" : (undefined)
00:01:36.301 Fetching value of define "__znver3__" : (undefined)
00:01:36.301 Fetching value of define "__znver4__" : (undefined)
00:01:36.301 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:36.301 Message: lib/log: Defining dependency "log"
00:01:36.301 Message: lib/kvargs: Defining dependency "kvargs"
00:01:36.301 Message: lib/telemetry: Defining dependency "telemetry"
00:01:36.301 Checking for function "getentropy" : NO
00:01:36.301 Message: lib/eal: Defining dependency "eal"
00:01:36.301 Message: lib/ring: Defining dependency "ring"
00:01:36.301 Message: lib/rcu: Defining dependency "rcu"
00:01:36.301 Message: lib/mempool: Defining dependency "mempool"
00:01:36.301 Message: lib/mbuf: Defining dependency "mbuf"
00:01:36.301 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:36.301 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:36.301 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:36.301 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:36.301 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:36.301 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:36.301 Compiler for C supports arguments -mpclmul: YES
00:01:36.301 Compiler for C supports arguments -maes: YES
00:01:36.301 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:36.301 Compiler for C supports arguments -mavx512bw: YES
00:01:36.301 Compiler for C supports arguments -mavx512dq: YES
00:01:36.301 Compiler for C supports arguments -mavx512vl: YES
00:01:36.301 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:36.301 Compiler for C supports arguments -mavx2: YES
00:01:36.301 Compiler for C supports arguments -mavx: YES
00:01:36.301 Message: lib/net: Defining dependency "net"
00:01:36.301 Message: lib/meter: Defining dependency "meter"
00:01:36.301 Message: lib/ethdev: Defining dependency "ethdev"
00:01:36.301 Message: lib/pci: Defining dependency "pci"
00:01:36.301 Message: lib/cmdline: Defining dependency "cmdline"
00:01:36.301 Message: lib/hash: Defining dependency "hash"
00:01:36.301 Message: lib/timer: Defining dependency "timer"
00:01:36.301 Message: lib/compressdev: Defining dependency "compressdev"
00:01:36.301 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:36.301 Message: lib/dmadev: Defining dependency "dmadev"
00:01:36.301 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:36.301 Message: lib/power: Defining dependency "power"
00:01:36.301 Message: lib/reorder: Defining dependency "reorder"
00:01:36.301 Message: lib/security: Defining dependency "security"
00:01:36.301 Has header "linux/userfaultfd.h" : YES
00:01:36.301 Has header "linux/vduse.h" : YES
00:01:36.301 Message: lib/vhost: Defining dependency "vhost"
00:01:36.301 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:36.301 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:36.301 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:36.301 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:36.301 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:36.301 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:36.301 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:36.301 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:36.301 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:36.301 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:36.301 Program doxygen found: YES (/usr/bin/doxygen)
00:01:36.301 Configuring doxy-api-html.conf using configuration
00:01:36.301 Configuring doxy-api-man.conf using configuration
00:01:36.301 Program mandb found: YES (/usr/bin/mandb)
00:01:36.301 Program sphinx-build found: NO
00:01:36.301 Configuring rte_build_config.h using configuration
00:01:36.301 Message:
00:01:36.301 =================
00:01:36.301 Applications Enabled
00:01:36.301 =================
00:01:36.301
00:01:36.301 apps:
00:01:36.301
00:01:36.301
00:01:36.301 Message:
00:01:36.301 =================
00:01:36.301 Libraries Enabled
00:01:36.301 =================
00:01:36.301
00:01:36.301 libs:
00:01:36.301 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:36.301 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:36.301 cryptodev, dmadev, power, reorder, security, vhost,
00:01:36.301
00:01:36.301 Message:
00:01:36.301 ===============
00:01:36.301 Drivers Enabled
00:01:36.301 ===============
00:01:36.301
00:01:36.301 common:
00:01:36.301
00:01:36.301 bus:
00:01:36.301 pci, vdev,
00:01:36.301 mempool:
00:01:36.302 ring,
00:01:36.302 dma:
00:01:36.302
00:01:36.302 net:
00:01:36.302
00:01:36.302 crypto:
00:01:36.302
00:01:36.302 compress:
00:01:36.302
00:01:36.302 vdpa:
00:01:36.302
00:01:36.302
00:01:36.302 Message:
00:01:36.302 =================
00:01:36.302 Content Skipped
00:01:36.302 =================
00:01:36.302
00:01:36.302 apps:
00:01:36.302 dumpcap: explicitly disabled via build config
00:01:36.302 graph: explicitly disabled via build config
00:01:36.302 pdump: explicitly disabled via build config
00:01:36.302 proc-info: explicitly disabled via build config
00:01:36.302 test-acl: explicitly disabled via build config
00:01:36.302 test-bbdev: explicitly disabled via build config
00:01:36.302 test-cmdline: explicitly disabled via build config
00:01:36.302 test-compress-perf: explicitly disabled via build config
00:01:36.302 test-crypto-perf: explicitly disabled via build config
00:01:36.302 test-dma-perf: explicitly disabled via build config
00:01:36.302 test-eventdev: explicitly disabled via build config
00:01:36.302 test-fib: explicitly disabled via build config
00:01:36.302 test-flow-perf: explicitly disabled via build config
00:01:36.302 test-gpudev: explicitly disabled via build config
00:01:36.302 test-mldev: explicitly disabled via build config
00:01:36.302 test-pipeline: explicitly disabled via build config
00:01:36.302 test-pmd: explicitly disabled via build config
00:01:36.302 test-regex: explicitly disabled via build config
00:01:36.302 test-sad: explicitly disabled via build config
00:01:36.302 test-security-perf: explicitly disabled via build config
00:01:36.302
00:01:36.302 libs:
00:01:36.302 argparse: explicitly disabled via build config
00:01:36.302 metrics: explicitly disabled via build config
00:01:36.302 acl: explicitly disabled via build config
00:01:36.302 bbdev: explicitly disabled via build config
00:01:36.302 bitratestats: explicitly disabled via build config
00:01:36.302 bpf: explicitly disabled via build config
00:01:36.302 cfgfile: explicitly disabled via build config
00:01:36.302 distributor: explicitly disabled via build config
00:01:36.302 efd: explicitly disabled via build config
00:01:36.302 eventdev: explicitly disabled via build config
00:01:36.302 dispatcher: explicitly disabled via build config
00:01:36.302 gpudev: explicitly disabled via build config
00:01:36.302 gro: explicitly disabled via build config
00:01:36.302 gso: explicitly disabled via build config
00:01:36.302 ip_frag: explicitly disabled via build config
00:01:36.302 jobstats: explicitly disabled via build config
00:01:36.302 latencystats: explicitly disabled via build config
00:01:36.302 lpm: explicitly disabled via build config
00:01:36.302 member: explicitly disabled via build config
00:01:36.302 pcapng: explicitly disabled via build config
00:01:36.302 rawdev: explicitly disabled via build config
00:01:36.302 regexdev: explicitly disabled via build config
00:01:36.302 mldev: explicitly disabled via build config
00:01:36.302 rib: explicitly disabled via build config
00:01:36.302 sched: explicitly disabled via build config
00:01:36.302 stack: explicitly disabled via build config
00:01:36.302 ipsec: explicitly disabled via build config
00:01:36.302 pdcp: explicitly disabled via build config
00:01:36.302 fib: explicitly disabled via build config
00:01:36.302 port: explicitly disabled via build config
00:01:36.302 pdump: explicitly disabled via build config
00:01:36.302 table: explicitly disabled via build config
00:01:36.302 pipeline: explicitly disabled via build config
00:01:36.302 graph: explicitly disabled via build config
00:01:36.302 node: explicitly disabled via build config
00:01:36.302
00:01:36.302 drivers:
00:01:36.302 common/cpt: not in enabled drivers build config
00:01:36.302 common/dpaax: not in enabled drivers build config
00:01:36.302 common/iavf: not in enabled drivers build config
00:01:36.302 common/idpf: not in enabled drivers build config
00:01:36.302 common/ionic: not in enabled drivers build config
00:01:36.302 common/mvep: not in enabled drivers build config
00:01:36.302 common/octeontx: not in enabled drivers build config
00:01:36.302 bus/auxiliary: not in enabled drivers build config
00:01:36.302 bus/cdx: not in enabled drivers build config
00:01:36.302 bus/dpaa: not in enabled drivers build config
00:01:36.302 bus/fslmc: not in enabled drivers build config
00:01:36.302 bus/ifpga: not in enabled drivers build config
00:01:36.302 bus/platform: not in enabled drivers build config
00:01:36.302 bus/uacce: not in enabled drivers build config
00:01:36.302 bus/vmbus: not in enabled drivers build config
00:01:36.302 common/cnxk: not in enabled drivers build config
00:01:36.302 common/mlx5: not in enabled drivers build config
00:01:36.302 common/nfp: not in enabled drivers build config
00:01:36.302 common/nitrox: not in enabled drivers build config
00:01:36.302 common/qat: not in enabled drivers build config
00:01:36.302 common/sfc_efx: not in enabled drivers build config
00:01:36.302 mempool/bucket: not in enabled drivers build config
00:01:36.302 mempool/cnxk: not in enabled drivers build config
00:01:36.302 mempool/dpaa: not in enabled drivers build config
00:01:36.302 mempool/dpaa2: not in enabled drivers build config
00:01:36.302 mempool/octeontx: not in enabled drivers build config
00:01:36.302 mempool/stack: not in enabled drivers build config
00:01:36.302 dma/cnxk: not in enabled drivers build config
00:01:36.302 dma/dpaa: not in enabled drivers build config
00:01:36.302 dma/dpaa2: not in enabled drivers build config
00:01:36.302 dma/hisilicon: not in enabled drivers build config
00:01:36.302 dma/idxd: not in enabled drivers build config
00:01:36.302 dma/ioat: not in enabled drivers build config
00:01:36.302 dma/skeleton: not in enabled drivers build config
00:01:36.302 net/af_packet: not in enabled drivers build config
00:01:36.302 net/af_xdp: not in enabled drivers build config
00:01:36.302 net/ark: not in enabled drivers build config
00:01:36.302 net/atlantic: not in enabled drivers build config
00:01:36.302 net/avp: not in enabled drivers build config
00:01:36.302 net/axgbe: not in enabled drivers build config
00:01:36.302 net/bnx2x: not in enabled drivers build config
00:01:36.302 net/bnxt: not in enabled drivers build config
00:01:36.302 net/bonding: not in enabled drivers build config
00:01:36.302 net/cnxk: not in enabled drivers build config
00:01:36.302 net/cpfl: not in enabled drivers build config
00:01:36.302 net/cxgbe: not in enabled drivers build config
00:01:36.302 net/dpaa: not in enabled drivers build config
00:01:36.302 net/dpaa2: not in enabled drivers build config
00:01:36.302 net/e1000: not in enabled drivers build config
00:01:36.302 net/ena: not in enabled drivers build config
00:01:36.302 net/enetc: not in enabled drivers build config
00:01:36.302 net/enetfec: not in enabled drivers build config
00:01:36.302 net/enic: not in enabled drivers build config
00:01:36.302 net/failsafe: not in enabled drivers build config
00:01:36.302 net/fm10k: not in enabled drivers build config
00:01:36.302 net/gve: not in enabled drivers build config
00:01:36.302 net/hinic: not in enabled drivers build config
00:01:36.302 net/hns3: not in enabled drivers build config
00:01:36.302 net/i40e: not in enabled drivers build config
00:01:36.302 net/iavf: not in enabled drivers build config
00:01:36.302 net/ice: not in enabled drivers build config
00:01:36.302 net/idpf: not in enabled drivers build config
00:01:36.302 net/igc: not in enabled drivers build config
00:01:36.302 net/ionic: not in enabled drivers build config
00:01:36.302 net/ipn3ke: not in enabled drivers build config
00:01:36.302 net/ixgbe: not in enabled drivers build config
00:01:36.302 net/mana: not in enabled drivers build config
00:01:36.302 net/memif: not in enabled drivers build config
00:01:36.302 net/mlx4: not in enabled drivers build config
00:01:36.302 net/mlx5: not in enabled drivers build config
00:01:36.302 net/mvneta: not in enabled drivers build config
00:01:36.302 net/mvpp2: not in enabled drivers build config
00:01:36.302 net/netvsc: not in enabled drivers build config
00:01:36.302 net/nfb: not in enabled drivers build config
00:01:36.302 net/nfp: not in enabled drivers build config
00:01:36.302 net/ngbe: not in enabled drivers build config
00:01:36.302 net/null: not in enabled drivers build config
00:01:36.302 net/octeontx: not in enabled drivers build config
00:01:36.302 net/octeon_ep: not in enabled drivers build config
00:01:36.302 net/pcap: not in enabled drivers build config
00:01:36.302 net/pfe: not in enabled drivers build config
00:01:36.302 net/qede: not in enabled drivers build config
00:01:36.302 net/ring: not in enabled drivers build config
00:01:36.302 net/sfc: not in enabled drivers build config
00:01:36.302 net/softnic: not in enabled drivers build config
00:01:36.302 net/tap: not in enabled drivers build config
00:01:36.302 net/thunderx: not in enabled drivers build config
00:01:36.302 net/txgbe: not in enabled drivers build config
00:01:36.302 net/vdev_netvsc: not in enabled drivers build config
00:01:36.302 net/vhost: not in enabled drivers build config
00:01:36.302 net/virtio: not in enabled drivers build config
00:01:36.302 net/vmxnet3: not in enabled drivers build config
00:01:36.302 raw/*: missing internal dependency, "rawdev"
00:01:36.302 crypto/armv8: not in enabled drivers build config
00:01:36.302 crypto/bcmfs: not in enabled drivers build config
00:01:36.302 crypto/caam_jr: not in enabled drivers build config
00:01:36.302 crypto/ccp: not in enabled drivers build config
00:01:36.302 crypto/cnxk: not in enabled drivers build config
00:01:36.302 crypto/dpaa_sec: not in enabled drivers build config
00:01:36.302 crypto/dpaa2_sec: not in enabled drivers build config
00:01:36.302 crypto/ipsec_mb: not in enabled drivers build config
00:01:36.302 crypto/mlx5: not in enabled drivers build config
00:01:36.302 crypto/mvsam: not in enabled drivers build config
00:01:36.302 crypto/nitrox: not in enabled drivers build config
00:01:36.302 crypto/null: not in enabled drivers build config
00:01:36.302 crypto/octeontx: not in enabled drivers build config
00:01:36.302 crypto/openssl: not in enabled drivers build config
00:01:36.302 crypto/scheduler: not in enabled drivers build config
00:01:36.302 crypto/uadk: not in enabled drivers build config
00:01:36.302 crypto/virtio: not in enabled drivers build config
00:01:36.302 compress/isal: not in enabled drivers build config
00:01:36.302 compress/mlx5: not in enabled drivers build config
00:01:36.302 compress/nitrox: not in enabled drivers build config
00:01:36.302 compress/octeontx: not in enabled drivers build config
00:01:36.302 compress/zlib: not in enabled drivers build config
00:01:36.302 regex/*: missing internal dependency, "regexdev"
00:01:36.302 ml/*: missing internal dependency, "mldev"
00:01:36.302 vdpa/ifc: not in enabled drivers build config
00:01:36.302 vdpa/mlx5: not in enabled drivers build config
00:01:36.303 vdpa/nfp: not in enabled drivers build config
00:01:36.303 vdpa/sfc: not in enabled drivers build config
00:01:36.303 event/*: missing internal dependency, "eventdev"
00:01:36.303 baseband/*: missing internal dependency, "bbdev"
00:01:36.303 gpu/*: missing internal dependency, "gpudev"
00:01:36.303
00:01:36.303
00:01:36.303 Build targets in project: 85
00:01:36.303
00:01:36.303 DPDK 24.03.0
00:01:36.303
00:01:36.303 User defined options
00:01:36.303 buildtype : debug
00:01:36.303 default_library : shared
00:01:36.303 libdir : lib
00:01:36.303 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:36.303 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:36.303 c_link_args :
00:01:36.303 cpu_instruction_set: native
00:01:36.303 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:36.303 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:36.303 enable_docs : false
00:01:36.303 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:36.303 enable_kmods : false
00:01:36.303 max_lcores : 128
00:01:36.303 tests : false
00:01:36.303
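The 'User defined options' summary above corresponds to a meson setup invocation along these lines. This is a reconstruction from the summary for illustration, not the literal command SPDK's scripts ran; the option names (enable_drivers, max_lcores, tests, enable_docs, c_args) are standard DPDK meson options:

  meson setup build-tmp dpdk --buildtype=debug -Ddefault_library=shared \
      -Denable_drivers=bus/pci,bus/vdev,mempool/ring \
      -Dmax_lcores=128 -Dtests=false -Denable_docs=false \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'

Restricting enable_drivers to the pci/vdev buses and the ring mempool is why every other driver shows up under 'Content Skipped'.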
00:01:36.303 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:36.570 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:36.570 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:36.570 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:36.570 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:36.570 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:36.834 [5/268] Linking static target lib/librte_kvargs.a
00:01:36.834 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:36.834 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:36.834 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:36.834 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:36.834 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:36.834 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:36.834 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:36.834 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:36.834 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:36.834 [15/268] Linking static target lib/librte_log.a
00:01:36.834 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:36.834 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:36.834 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:36.834 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:36.834 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:36.834 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:36.834 [22/268] Linking static target lib/librte_pci.a
00:01:36.834 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:36.834 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:37.097 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:37.097 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:37.097 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:37.097 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:37.097 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:37.097 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:37.097 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:37.097 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:37.097 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:37.097 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:37.097 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:37.097 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:37.097 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:37.097 [38/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:37.097 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:37.097 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:37.097 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:37.097 [42/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:37.097 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:37.097 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:37.097 [45/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:37.097 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:37.097 [47/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:37.097 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:37.097 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:37.097 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:37.097 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:37.097 [52/268] Linking static target lib/librte_meter.a
00:01:37.097 [53/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:37.356 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:37.356 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:37.356 [56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:37.356 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:37.356 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:37.356 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:37.356 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:37.356 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:37.356 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:37.356 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:37.356 [64/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:37.356 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:37.356 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:37.356 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:37.356 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:37.356 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:37.356 [70/268] Linking static target lib/librte_ring.a
00:01:37.356 [71/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.356 [73/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.356 [74/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.356 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.356 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.356 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.356 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.356 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.356 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.356 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.356 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.356 [83/268] Linking static target lib/librte_telemetry.a 00:01:37.356 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.356 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.356 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.356 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.356 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.356 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:37.356 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:37.356 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.356 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.356 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.356 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.356 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.356 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.356 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.356 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.356 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.356 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.356 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.356 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.356 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.356 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:37.356 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:37.356 [106/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.356 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:37.356 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.356 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.356 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:37.356 [111/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:37.356 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:37.356 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.356 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.356 [115/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:37.356 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.356 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.356 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:37.356 [119/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.356 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.356 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.356 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.356 [123/268] Linking static target lib/librte_net.a 00:01:37.356 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.356 [125/268] Linking static target lib/librte_mempool.a 00:01:37.356 [126/268] Linking static target lib/librte_eal.a 00:01:37.356 [127/268] Linking static target lib/librte_cmdline.a 00:01:37.356 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.356 [129/268] Linking static target lib/librte_rcu.a 00:01:37.356 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.356 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.356 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.615 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.615 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.615 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.615 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.615 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.615 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.615 [139/268] Linking target lib/librte_log.so.24.1 00:01:37.615 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.615 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:37.615 [142/268] Linking static target lib/librte_timer.a 00:01:37.615 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.615 [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:37.615 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.615 [146/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:37.615 [147/268] Linking static target lib/librte_mbuf.a 00:01:37.615 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:37.615 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.615 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:37.615 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.615 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:37.615 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:37.615 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:37.615 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:37.615 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.615 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.615 [158/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:37.615 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.615 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:37.616 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.616 [162/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:37.616 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.616 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:37.616 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.616 [166/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.616 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:37.874 [168/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.874 [169/268] Linking target lib/librte_kvargs.so.24.1 00:01:37.874 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.874 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:37.874 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.874 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:37.874 [174/268] Linking static target lib/librte_power.a 00:01:37.874 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.874 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:37.874 [177/268] Linking target lib/librte_telemetry.so.24.1 00:01:37.874 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:37.874 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.874 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:37.874 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.874 [182/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.874 [183/268] Linking static target lib/librte_compressdev.a 00:01:37.874 [184/268] Linking static target lib/librte_dmadev.a 00:01:37.874 [185/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:37.874 [186/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:37.874 [187/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:37.874 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:37.874 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:37.874 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:37.874 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.874 [192/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.874 [193/268] Linking static target drivers/librte_bus_vdev.a 00:01:37.874 [194/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.874 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:37.874 [196/268] Linking static target lib/librte_reorder.a 00:01:37.874 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.133 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.133 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.133 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.133 [201/268] Linking static target lib/librte_security.a 00:01:38.133 [202/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.133 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.133 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.133 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.133 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.133 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:38.133 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:38.133 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.133 [210/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.133 [211/268] Linking static target lib/librte_hash.a 00:01:38.133 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.133 [213/268] Linking static target lib/librte_cryptodev.a 00:01:38.133 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.393 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.393 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:38.393 [217/268] Linking static target lib/librte_ethdev.a 00:01:38.393 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.393 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.393 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.393 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.393 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.651 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.651 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.651 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:38.651 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.910 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.488 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:39.488 [229/268] Linking static 
target lib/librte_vhost.a 00:01:40.101 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.479 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.762 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.699 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.699 [234/268] Linking target lib/librte_eal.so.24.1 00:01:47.958 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:47.958 [236/268] Linking target lib/librte_ring.so.24.1 00:01:47.958 [237/268] Linking target lib/librte_meter.so.24.1 00:01:47.958 [238/268] Linking target lib/librte_timer.so.24.1 00:01:47.958 [239/268] Linking target lib/librte_pci.so.24.1 00:01:47.958 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:47.958 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:47.958 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:47.958 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:47.958 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:47.958 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:47.958 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:47.958 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:48.216 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:48.216 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:48.216 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:48.216 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:48.216 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:48.216 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:48.474 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.474 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:48.474 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:48.474 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:48.474 [258/268] Linking target lib/librte_net.so.24.1 00:01:48.474 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:48.474 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:48.733 [261/268] Linking target lib/librte_security.so.24.1 00:01:48.733 [262/268] Linking target lib/librte_hash.so.24.1 00:01:48.733 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:48.733 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:48.733 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:48.733 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:48.733 [267/268] Linking target lib/librte_power.so.24.1 00:01:48.733 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:48.733 INFO: autodetecting backend as ninja 00:01:48.733 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:50.111 CC lib/ut_mock/mock.o 00:01:50.111 CC lib/log/log.o 00:01:50.111 CC lib/log/log_flags.o 00:01:50.111 CC lib/log/log_deprecated.o 
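The [1/268]…[268/268] run above is the bundled DPDK being compiled in spdk/dpdk/build-tmp with the meson configuration summarized at the top of this log (docs, kmods and tests off, only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled, max_lcores 128); the CC/LIB/SO/SYMLINK lines that follow are SPDK's own make-based build. A minimal sketch of an equivalent standalone DPDK configure, assuming current DPDK meson option names (in this job the options are generated by SPDK's ./configure wrapper rather than typed by hand, and the long -Ddisable_libs list shown in the summary is elided here):

  # Configure and build DPDK roughly as the summary above describes.
  meson setup build-tmp \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Dmax_lcores=128
  ninja -C build-tmp -j 96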
00:01:50.111 CC lib/ut/ut.o 00:01:50.111 LIB libspdk_log.a 00:01:50.111 LIB libspdk_ut_mock.a 00:01:50.111 LIB libspdk_ut.a 00:01:50.111 SO libspdk_log.so.7.0 00:01:50.111 SO libspdk_ut_mock.so.6.0 00:01:50.111 SO libspdk_ut.so.2.0 00:01:50.111 SYMLINK libspdk_ut_mock.so 00:01:50.111 SYMLINK libspdk_ut.so 00:01:50.111 SYMLINK libspdk_log.so 00:01:50.369 CXX lib/trace_parser/trace.o 00:01:50.369 CC lib/util/base64.o 00:01:50.369 CC lib/util/bit_array.o 00:01:50.369 CC lib/util/cpuset.o 00:01:50.369 CC lib/util/crc16.o 00:01:50.369 CC lib/util/crc32.o 00:01:50.369 CC lib/util/crc32c.o 00:01:50.369 CC lib/util/crc32_ieee.o 00:01:50.369 CC lib/ioat/ioat.o 00:01:50.369 CC lib/dma/dma.o 00:01:50.369 CC lib/util/crc64.o 00:01:50.369 CC lib/util/dif.o 00:01:50.369 CC lib/util/fd.o 00:01:50.369 CC lib/util/file.o 00:01:50.369 CC lib/util/hexlify.o 00:01:50.369 CC lib/util/iov.o 00:01:50.369 CC lib/util/math.o 00:01:50.369 CC lib/util/pipe.o 00:01:50.369 CC lib/util/strerror_tls.o 00:01:50.369 CC lib/util/string.o 00:01:50.369 CC lib/util/uuid.o 00:01:50.369 CC lib/util/fd_group.o 00:01:50.369 CC lib/util/xor.o 00:01:50.369 CC lib/util/zipf.o 00:01:50.628 CC lib/vfio_user/host/vfio_user_pci.o 00:01:50.628 CC lib/vfio_user/host/vfio_user.o 00:01:50.628 LIB libspdk_dma.a 00:01:50.628 SO libspdk_dma.so.4.0 00:01:50.628 LIB libspdk_ioat.a 00:01:50.628 SYMLINK libspdk_dma.so 00:01:50.628 SO libspdk_ioat.so.7.0 00:01:50.628 SYMLINK libspdk_ioat.so 00:01:50.909 LIB libspdk_vfio_user.a 00:01:50.909 SO libspdk_vfio_user.so.5.0 00:01:50.909 LIB libspdk_util.a 00:01:50.909 SYMLINK libspdk_vfio_user.so 00:01:50.909 SO libspdk_util.so.9.1 00:01:50.909 SYMLINK libspdk_util.so 00:01:51.168 LIB libspdk_trace_parser.a 00:01:51.168 SO libspdk_trace_parser.so.5.0 00:01:51.168 SYMLINK libspdk_trace_parser.so 00:01:51.426 CC lib/json/json_parse.o 00:01:51.426 CC lib/json/json_util.o 00:01:51.426 CC lib/json/json_write.o 00:01:51.426 CC lib/rdma_provider/common.o 00:01:51.427 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:51.427 CC lib/vmd/vmd.o 00:01:51.427 CC lib/rdma_utils/rdma_utils.o 00:01:51.427 CC lib/vmd/led.o 00:01:51.427 CC lib/idxd/idxd.o 00:01:51.427 CC lib/env_dpdk/env.o 00:01:51.427 CC lib/conf/conf.o 00:01:51.427 CC lib/idxd/idxd_user.o 00:01:51.427 CC lib/env_dpdk/memory.o 00:01:51.427 CC lib/idxd/idxd_kernel.o 00:01:51.427 CC lib/env_dpdk/pci.o 00:01:51.427 CC lib/env_dpdk/init.o 00:01:51.427 CC lib/env_dpdk/threads.o 00:01:51.427 CC lib/env_dpdk/pci_ioat.o 00:01:51.427 CC lib/env_dpdk/pci_virtio.o 00:01:51.427 CC lib/env_dpdk/pci_vmd.o 00:01:51.427 CC lib/env_dpdk/pci_idxd.o 00:01:51.427 CC lib/env_dpdk/pci_event.o 00:01:51.427 CC lib/env_dpdk/sigbus_handler.o 00:01:51.427 CC lib/env_dpdk/pci_dpdk.o 00:01:51.427 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:51.427 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:51.427 LIB libspdk_rdma_provider.a 00:01:51.685 LIB libspdk_conf.a 00:01:51.685 SO libspdk_rdma_provider.so.6.0 00:01:51.685 SO libspdk_conf.so.6.0 00:01:51.685 LIB libspdk_rdma_utils.a 00:01:51.685 LIB libspdk_json.a 00:01:51.685 SO libspdk_rdma_utils.so.1.0 00:01:51.685 SYMLINK libspdk_rdma_provider.so 00:01:51.685 SYMLINK libspdk_conf.so 00:01:51.685 SO libspdk_json.so.6.0 00:01:51.685 SYMLINK libspdk_rdma_utils.so 00:01:51.685 SYMLINK libspdk_json.so 00:01:51.685 LIB libspdk_idxd.a 00:01:51.942 SO libspdk_idxd.so.12.0 00:01:51.942 LIB libspdk_vmd.a 00:01:51.942 SO libspdk_vmd.so.6.0 00:01:51.942 SYMLINK libspdk_idxd.so 00:01:51.942 SYMLINK libspdk_vmd.so 00:01:51.942 CC lib/jsonrpc/jsonrpc_server.o 
00:01:51.942 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:51.942 CC lib/jsonrpc/jsonrpc_client.o 00:01:51.942 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:52.200 LIB libspdk_jsonrpc.a 00:01:52.200 SO libspdk_jsonrpc.so.6.0 00:01:52.458 SYMLINK libspdk_jsonrpc.so 00:01:52.458 LIB libspdk_env_dpdk.a 00:01:52.459 SO libspdk_env_dpdk.so.14.1 00:01:52.717 SYMLINK libspdk_env_dpdk.so 00:01:52.717 CC lib/rpc/rpc.o 00:01:52.717 LIB libspdk_rpc.a 00:01:52.977 SO libspdk_rpc.so.6.0 00:01:52.977 SYMLINK libspdk_rpc.so 00:01:53.237 CC lib/keyring/keyring.o 00:01:53.237 CC lib/notify/notify.o 00:01:53.237 CC lib/keyring/keyring_rpc.o 00:01:53.237 CC lib/notify/notify_rpc.o 00:01:53.237 CC lib/trace/trace.o 00:01:53.237 CC lib/trace/trace_flags.o 00:01:53.237 CC lib/trace/trace_rpc.o 00:01:53.497 LIB libspdk_notify.a 00:01:53.497 LIB libspdk_keyring.a 00:01:53.497 SO libspdk_notify.so.6.0 00:01:53.497 LIB libspdk_trace.a 00:01:53.497 SO libspdk_keyring.so.1.0 00:01:53.497 SO libspdk_trace.so.10.0 00:01:53.497 SYMLINK libspdk_notify.so 00:01:53.497 SYMLINK libspdk_keyring.so 00:01:53.497 SYMLINK libspdk_trace.so 00:01:53.756 CC lib/sock/sock.o 00:01:53.756 CC lib/sock/sock_rpc.o 00:01:53.756 CC lib/thread/thread.o 00:01:53.756 CC lib/thread/iobuf.o 00:01:54.321 LIB libspdk_sock.a 00:01:54.321 SO libspdk_sock.so.10.0 00:01:54.321 SYMLINK libspdk_sock.so 00:01:54.579 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:54.579 CC lib/nvme/nvme_ctrlr.o 00:01:54.579 CC lib/nvme/nvme_fabric.o 00:01:54.579 CC lib/nvme/nvme_ns_cmd.o 00:01:54.579 CC lib/nvme/nvme_ns.o 00:01:54.579 CC lib/nvme/nvme_pcie_common.o 00:01:54.579 CC lib/nvme/nvme_pcie.o 00:01:54.579 CC lib/nvme/nvme_qpair.o 00:01:54.579 CC lib/nvme/nvme.o 00:01:54.579 CC lib/nvme/nvme_quirks.o 00:01:54.579 CC lib/nvme/nvme_transport.o 00:01:54.579 CC lib/nvme/nvme_discovery.o 00:01:54.579 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:54.579 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:54.579 CC lib/nvme/nvme_tcp.o 00:01:54.579 CC lib/nvme/nvme_opal.o 00:01:54.579 CC lib/nvme/nvme_io_msg.o 00:01:54.579 CC lib/nvme/nvme_poll_group.o 00:01:54.579 CC lib/nvme/nvme_zns.o 00:01:54.579 CC lib/nvme/nvme_stubs.o 00:01:54.579 CC lib/nvme/nvme_auth.o 00:01:54.579 CC lib/nvme/nvme_cuse.o 00:01:54.579 CC lib/nvme/nvme_vfio_user.o 00:01:54.579 CC lib/nvme/nvme_rdma.o 00:01:54.837 LIB libspdk_thread.a 00:01:54.837 SO libspdk_thread.so.10.1 00:01:55.095 SYMLINK libspdk_thread.so 00:01:55.352 CC lib/blob/blobstore.o 00:01:55.352 CC lib/blob/request.o 00:01:55.352 CC lib/blob/blob_bs_dev.o 00:01:55.352 CC lib/blob/zeroes.o 00:01:55.352 CC lib/vfu_tgt/tgt_endpoint.o 00:01:55.352 CC lib/accel/accel.o 00:01:55.352 CC lib/accel/accel_rpc.o 00:01:55.352 CC lib/accel/accel_sw.o 00:01:55.352 CC lib/init/json_config.o 00:01:55.352 CC lib/vfu_tgt/tgt_rpc.o 00:01:55.352 CC lib/init/subsystem.o 00:01:55.352 CC lib/init/subsystem_rpc.o 00:01:55.352 CC lib/init/rpc.o 00:01:55.352 CC lib/virtio/virtio.o 00:01:55.352 CC lib/virtio/virtio_vhost_user.o 00:01:55.352 CC lib/virtio/virtio_vfio_user.o 00:01:55.352 CC lib/virtio/virtio_pci.o 00:01:55.610 LIB libspdk_init.a 00:01:55.610 SO libspdk_init.so.5.0 00:01:55.610 LIB libspdk_vfu_tgt.a 00:01:55.610 LIB libspdk_virtio.a 00:01:55.610 SO libspdk_vfu_tgt.so.3.0 00:01:55.610 SYMLINK libspdk_init.so 00:01:55.610 SO libspdk_virtio.so.7.0 00:01:55.610 SYMLINK libspdk_vfu_tgt.so 00:01:55.610 SYMLINK libspdk_virtio.so 00:01:55.868 CC lib/event/app.o 00:01:55.868 CC lib/event/reactor.o 00:01:55.868 CC lib/event/log_rpc.o 00:01:55.868 CC lib/event/app_rpc.o 00:01:55.868 CC 
lib/event/scheduler_static.o 00:01:55.868 LIB libspdk_accel.a 00:01:56.126 SO libspdk_accel.so.15.1 00:01:56.126 SYMLINK libspdk_accel.so 00:01:56.126 LIB libspdk_nvme.a 00:01:56.126 LIB libspdk_event.a 00:01:56.384 SO libspdk_nvme.so.13.1 00:01:56.384 SO libspdk_event.so.14.0 00:01:56.384 SYMLINK libspdk_event.so 00:01:56.384 CC lib/bdev/bdev.o 00:01:56.384 CC lib/bdev/bdev_rpc.o 00:01:56.384 CC lib/bdev/bdev_zone.o 00:01:56.384 CC lib/bdev/part.o 00:01:56.384 CC lib/bdev/scsi_nvme.o 00:01:56.384 SYMLINK libspdk_nvme.so 00:01:57.316 LIB libspdk_blob.a 00:01:57.316 SO libspdk_blob.so.11.0 00:01:57.575 SYMLINK libspdk_blob.so 00:01:57.832 CC lib/blobfs/blobfs.o 00:01:57.832 CC lib/lvol/lvol.o 00:01:57.832 CC lib/blobfs/tree.o 00:01:58.089 LIB libspdk_bdev.a 00:01:58.089 SO libspdk_bdev.so.15.1 00:01:58.347 SYMLINK libspdk_bdev.so 00:01:58.347 LIB libspdk_blobfs.a 00:01:58.347 SO libspdk_blobfs.so.10.0 00:01:58.605 LIB libspdk_lvol.a 00:01:58.605 SO libspdk_lvol.so.10.0 00:01:58.605 SYMLINK libspdk_blobfs.so 00:01:58.605 CC lib/ublk/ublk.o 00:01:58.605 CC lib/ublk/ublk_rpc.o 00:01:58.605 CC lib/nvmf/ctrlr_discovery.o 00:01:58.605 CC lib/nvmf/ctrlr.o 00:01:58.605 SYMLINK libspdk_lvol.so 00:01:58.605 CC lib/nvmf/ctrlr_bdev.o 00:01:58.605 CC lib/nvmf/nvmf_rpc.o 00:01:58.605 CC lib/nvmf/subsystem.o 00:01:58.605 CC lib/nvmf/nvmf.o 00:01:58.605 CC lib/nvmf/transport.o 00:01:58.605 CC lib/scsi/dev.o 00:01:58.605 CC lib/nvmf/tcp.o 00:01:58.605 CC lib/nvmf/stubs.o 00:01:58.605 CC lib/scsi/lun.o 00:01:58.605 CC lib/nvmf/mdns_server.o 00:01:58.605 CC lib/scsi/port.o 00:01:58.605 CC lib/nvmf/vfio_user.o 00:01:58.605 CC lib/ftl/ftl_core.o 00:01:58.605 CC lib/nvmf/rdma.o 00:01:58.605 CC lib/scsi/scsi.o 00:01:58.605 CC lib/ftl/ftl_init.o 00:01:58.605 CC lib/nbd/nbd.o 00:01:58.605 CC lib/nvmf/auth.o 00:01:58.605 CC lib/scsi/scsi_bdev.o 00:01:58.605 CC lib/ftl/ftl_layout.o 00:01:58.605 CC lib/nbd/nbd_rpc.o 00:01:58.605 CC lib/ftl/ftl_debug.o 00:01:58.605 CC lib/scsi/scsi_pr.o 00:01:58.605 CC lib/ftl/ftl_io.o 00:01:58.605 CC lib/scsi/scsi_rpc.o 00:01:58.605 CC lib/ftl/ftl_sb.o 00:01:58.605 CC lib/ftl/ftl_l2p.o 00:01:58.605 CC lib/scsi/task.o 00:01:58.605 CC lib/ftl/ftl_l2p_flat.o 00:01:58.605 CC lib/ftl/ftl_nv_cache.o 00:01:58.605 CC lib/ftl/ftl_band.o 00:01:58.605 CC lib/ftl/ftl_band_ops.o 00:01:58.605 CC lib/ftl/ftl_writer.o 00:01:58.605 CC lib/ftl/ftl_rq.o 00:01:58.605 CC lib/ftl/ftl_reloc.o 00:01:58.605 CC lib/ftl/ftl_l2p_cache.o 00:01:58.605 CC lib/ftl/ftl_p2l.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:58.605 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:58.605 CC lib/ftl/utils/ftl_conf.o 00:01:58.605 CC lib/ftl/utils/ftl_md.o 00:01:58.605 CC lib/ftl/utils/ftl_bitmap.o 00:01:58.605 CC lib/ftl/utils/ftl_mempool.o 00:01:58.605 CC lib/ftl/utils/ftl_property.o 00:01:58.605 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:58.605 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:58.605 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:58.605 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:58.605 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:58.605 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:58.605 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:58.605 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:58.605 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:58.605 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:58.605 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:58.605 CC lib/ftl/base/ftl_base_dev.o 00:01:58.605 CC lib/ftl/base/ftl_base_bdev.o 00:01:58.605 CC lib/ftl/ftl_trace.o 00:01:59.198 LIB libspdk_nbd.a 00:01:59.198 SO libspdk_nbd.so.7.0 00:01:59.198 LIB libspdk_scsi.a 00:01:59.198 SYMLINK libspdk_nbd.so 00:01:59.198 SO libspdk_scsi.so.9.0 00:01:59.490 SYMLINK libspdk_scsi.so 00:01:59.490 LIB libspdk_ublk.a 00:01:59.490 SO libspdk_ublk.so.3.0 00:01:59.490 SYMLINK libspdk_ublk.so 00:01:59.490 CC lib/vhost/vhost.o 00:01:59.490 CC lib/vhost/vhost_rpc.o 00:01:59.490 CC lib/iscsi/conn.o 00:01:59.490 CC lib/vhost/vhost_scsi.o 00:01:59.748 CC lib/iscsi/init_grp.o 00:01:59.748 CC lib/vhost/vhost_blk.o 00:01:59.748 CC lib/vhost/rte_vhost_user.o 00:01:59.748 CC lib/iscsi/iscsi.o 00:01:59.748 CC lib/iscsi/md5.o 00:01:59.748 CC lib/iscsi/param.o 00:01:59.748 CC lib/iscsi/portal_grp.o 00:01:59.748 CC lib/iscsi/tgt_node.o 00:01:59.748 CC lib/iscsi/iscsi_subsystem.o 00:01:59.748 CC lib/iscsi/iscsi_rpc.o 00:01:59.748 CC lib/iscsi/task.o 00:01:59.748 LIB libspdk_ftl.a 00:01:59.748 SO libspdk_ftl.so.9.0 00:02:00.006 SYMLINK libspdk_ftl.so 00:02:00.264 LIB libspdk_nvmf.a 00:02:00.264 SO libspdk_nvmf.so.18.1 00:02:00.522 LIB libspdk_vhost.a 00:02:00.522 SO libspdk_vhost.so.8.0 00:02:00.522 SYMLINK libspdk_nvmf.so 00:02:00.522 SYMLINK libspdk_vhost.so 00:02:00.522 LIB libspdk_iscsi.a 00:02:00.780 SO libspdk_iscsi.so.8.0 00:02:00.780 SYMLINK libspdk_iscsi.so 00:02:01.348 CC module/vfu_device/vfu_virtio_blk.o 00:02:01.348 CC module/vfu_device/vfu_virtio.o 00:02:01.348 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.348 CC module/vfu_device/vfu_virtio_scsi.o 00:02:01.348 CC module/vfu_device/vfu_virtio_rpc.o 00:02:01.348 CC module/blob/bdev/blob_bdev.o 00:02:01.348 LIB libspdk_env_dpdk_rpc.a 00:02:01.348 CC module/accel/error/accel_error.o 00:02:01.348 CC module/accel/error/accel_error_rpc.o 00:02:01.348 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.348 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.348 CC module/sock/posix/posix.o 00:02:01.348 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.348 CC module/keyring/file/keyring.o 00:02:01.348 CC module/keyring/linux/keyring.o 00:02:01.348 CC module/accel/dsa/accel_dsa.o 00:02:01.348 CC module/keyring/linux/keyring_rpc.o 00:02:01.348 CC module/keyring/file/keyring_rpc.o 00:02:01.348 CC module/accel/dsa/accel_dsa_rpc.o 00:02:01.348 CC module/accel/iaa/accel_iaa.o 00:02:01.348 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.348 CC module/accel/ioat/accel_ioat.o 00:02:01.606 CC module/accel/ioat/accel_ioat_rpc.o 00:02:01.606 SO libspdk_env_dpdk_rpc.so.6.0 00:02:01.606 SYMLINK libspdk_env_dpdk_rpc.so 00:02:01.606 LIB libspdk_scheduler_dpdk_governor.a 00:02:01.606 LIB libspdk_keyring_linux.a 00:02:01.606 LIB libspdk_scheduler_gscheduler.a 00:02:01.606 LIB libspdk_keyring_file.a 00:02:01.606 LIB libspdk_accel_error.a 00:02:01.606 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:01.606 SO libspdk_scheduler_gscheduler.so.4.0 00:02:01.606 LIB libspdk_scheduler_dynamic.a 00:02:01.606 SO libspdk_keyring_linux.so.1.0 00:02:01.606 SO libspdk_keyring_file.so.1.0 00:02:01.606 LIB libspdk_accel_ioat.a 00:02:01.606 SO libspdk_accel_error.so.2.0 00:02:01.606 LIB libspdk_accel_iaa.a 00:02:01.606 LIB 
libspdk_blob_bdev.a 00:02:01.606 SO libspdk_scheduler_dynamic.so.4.0 00:02:01.606 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:01.606 SO libspdk_accel_ioat.so.6.0 00:02:01.606 SYMLINK libspdk_scheduler_gscheduler.so 00:02:01.606 SYMLINK libspdk_keyring_linux.so 00:02:01.606 SO libspdk_blob_bdev.so.11.0 00:02:01.606 SYMLINK libspdk_keyring_file.so 00:02:01.606 SO libspdk_accel_iaa.so.3.0 00:02:01.606 LIB libspdk_accel_dsa.a 00:02:01.606 SYMLINK libspdk_accel_error.so 00:02:01.606 SYMLINK libspdk_scheduler_dynamic.so 00:02:01.863 SYMLINK libspdk_accel_ioat.so 00:02:01.863 SYMLINK libspdk_blob_bdev.so 00:02:01.863 SO libspdk_accel_dsa.so.5.0 00:02:01.863 SYMLINK libspdk_accel_iaa.so 00:02:01.863 SYMLINK libspdk_accel_dsa.so 00:02:01.863 LIB libspdk_vfu_device.a 00:02:01.863 SO libspdk_vfu_device.so.3.0 00:02:01.863 SYMLINK libspdk_vfu_device.so 00:02:02.122 LIB libspdk_sock_posix.a 00:02:02.122 SO libspdk_sock_posix.so.6.0 00:02:02.122 SYMLINK libspdk_sock_posix.so 00:02:02.122 CC module/bdev/gpt/gpt.o 00:02:02.122 CC module/bdev/gpt/vbdev_gpt.o 00:02:02.122 CC module/bdev/delay/vbdev_delay.o 00:02:02.122 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:02.122 CC module/bdev/error/vbdev_error.o 00:02:02.122 CC module/bdev/error/vbdev_error_rpc.o 00:02:02.122 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:02.122 CC module/bdev/lvol/vbdev_lvol.o 00:02:02.122 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:02.122 CC module/blobfs/bdev/blobfs_bdev.o 00:02:02.122 CC module/bdev/iscsi/bdev_iscsi.o 00:02:02.122 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:02.122 CC module/bdev/null/bdev_null.o 00:02:02.122 CC module/bdev/null/bdev_null_rpc.o 00:02:02.122 CC module/bdev/split/vbdev_split.o 00:02:02.122 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:02.122 CC module/bdev/nvme/bdev_nvme.o 00:02:02.122 CC module/bdev/split/vbdev_split_rpc.o 00:02:02.122 CC module/bdev/malloc/bdev_malloc.o 00:02:02.122 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:02.122 CC module/bdev/nvme/bdev_mdns_client.o 00:02:02.122 CC module/bdev/nvme/nvme_rpc.o 00:02:02.122 CC module/bdev/nvme/vbdev_opal.o 00:02:02.122 CC module/bdev/raid/bdev_raid.o 00:02:02.122 CC module/bdev/raid/bdev_raid_rpc.o 00:02:02.122 CC module/bdev/raid/bdev_raid_sb.o 00:02:02.122 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:02.122 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:02.122 CC module/bdev/raid/raid0.o 00:02:02.122 CC module/bdev/aio/bdev_aio.o 00:02:02.122 CC module/bdev/raid/raid1.o 00:02:02.122 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:02.122 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:02.122 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:02.122 CC module/bdev/raid/concat.o 00:02:02.122 CC module/bdev/aio/bdev_aio_rpc.o 00:02:02.122 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:02.122 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:02.122 CC module/bdev/passthru/vbdev_passthru.o 00:02:02.122 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:02.122 CC module/bdev/ftl/bdev_ftl.o 00:02:02.122 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:02.380 LIB libspdk_blobfs_bdev.a 00:02:02.380 SO libspdk_blobfs_bdev.so.6.0 00:02:02.380 LIB libspdk_bdev_split.a 00:02:02.380 LIB libspdk_bdev_gpt.a 00:02:02.380 LIB libspdk_bdev_null.a 00:02:02.380 SO libspdk_bdev_split.so.6.0 00:02:02.640 LIB libspdk_bdev_error.a 00:02:02.640 SO libspdk_bdev_gpt.so.6.0 00:02:02.640 SYMLINK libspdk_blobfs_bdev.so 00:02:02.640 LIB libspdk_bdev_passthru.a 00:02:02.640 SO libspdk_bdev_null.so.6.0 00:02:02.640 SO libspdk_bdev_error.so.6.0 00:02:02.640 LIB 
libspdk_bdev_zone_block.a 00:02:02.640 LIB libspdk_bdev_delay.a 00:02:02.640 LIB libspdk_bdev_ftl.a 00:02:02.640 LIB libspdk_bdev_aio.a 00:02:02.640 LIB libspdk_bdev_iscsi.a 00:02:02.640 SO libspdk_bdev_passthru.so.6.0 00:02:02.640 SYMLINK libspdk_bdev_split.so 00:02:02.640 SYMLINK libspdk_bdev_gpt.so 00:02:02.640 SO libspdk_bdev_zone_block.so.6.0 00:02:02.640 SYMLINK libspdk_bdev_error.so 00:02:02.640 SO libspdk_bdev_delay.so.6.0 00:02:02.640 SYMLINK libspdk_bdev_null.so 00:02:02.640 SO libspdk_bdev_ftl.so.6.0 00:02:02.640 SO libspdk_bdev_aio.so.6.0 00:02:02.640 SO libspdk_bdev_iscsi.so.6.0 00:02:02.640 LIB libspdk_bdev_malloc.a 00:02:02.640 SYMLINK libspdk_bdev_passthru.so 00:02:02.640 SYMLINK libspdk_bdev_zone_block.so 00:02:02.640 SYMLINK libspdk_bdev_ftl.so 00:02:02.640 SO libspdk_bdev_malloc.so.6.0 00:02:02.640 SYMLINK libspdk_bdev_delay.so 00:02:02.640 SYMLINK libspdk_bdev_aio.so 00:02:02.640 SYMLINK libspdk_bdev_iscsi.so 00:02:02.640 LIB libspdk_bdev_lvol.a 00:02:02.640 SO libspdk_bdev_lvol.so.6.0 00:02:02.640 SYMLINK libspdk_bdev_malloc.so 00:02:02.640 LIB libspdk_bdev_virtio.a 00:02:02.899 SO libspdk_bdev_virtio.so.6.0 00:02:02.899 SYMLINK libspdk_bdev_lvol.so 00:02:02.899 SYMLINK libspdk_bdev_virtio.so 00:02:02.899 LIB libspdk_bdev_raid.a 00:02:03.157 SO libspdk_bdev_raid.so.6.0 00:02:03.157 SYMLINK libspdk_bdev_raid.so 00:02:03.724 LIB libspdk_bdev_nvme.a 00:02:03.982 SO libspdk_bdev_nvme.so.7.0 00:02:03.982 SYMLINK libspdk_bdev_nvme.so 00:02:04.550 CC module/event/subsystems/iobuf/iobuf.o 00:02:04.550 CC module/event/subsystems/vmd/vmd.o 00:02:04.550 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:04.550 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:04.550 CC module/event/subsystems/sock/sock.o 00:02:04.550 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.550 CC module/event/subsystems/keyring/keyring.o 00:02:04.550 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:04.550 CC module/event/subsystems/scheduler/scheduler.o 00:02:04.808 LIB libspdk_event_keyring.a 00:02:04.808 LIB libspdk_event_sock.a 00:02:04.808 LIB libspdk_event_vfu_tgt.a 00:02:04.808 LIB libspdk_event_vmd.a 00:02:04.808 LIB libspdk_event_iobuf.a 00:02:04.808 LIB libspdk_event_vhost_blk.a 00:02:04.808 LIB libspdk_event_scheduler.a 00:02:04.808 SO libspdk_event_keyring.so.1.0 00:02:04.808 SO libspdk_event_sock.so.5.0 00:02:04.808 SO libspdk_event_vfu_tgt.so.3.0 00:02:04.808 SO libspdk_event_vhost_blk.so.3.0 00:02:04.808 SO libspdk_event_vmd.so.6.0 00:02:04.808 SO libspdk_event_iobuf.so.3.0 00:02:04.808 SO libspdk_event_scheduler.so.4.0 00:02:04.808 SYMLINK libspdk_event_keyring.so 00:02:04.808 SYMLINK libspdk_event_sock.so 00:02:04.808 SYMLINK libspdk_event_vfu_tgt.so 00:02:04.808 SYMLINK libspdk_event_vhost_blk.so 00:02:04.808 SYMLINK libspdk_event_scheduler.so 00:02:04.808 SYMLINK libspdk_event_iobuf.so 00:02:04.808 SYMLINK libspdk_event_vmd.so 00:02:05.067 CC module/event/subsystems/accel/accel.o 00:02:05.326 LIB libspdk_event_accel.a 00:02:05.326 SO libspdk_event_accel.so.6.0 00:02:05.326 SYMLINK libspdk_event_accel.so 00:02:05.894 CC module/event/subsystems/bdev/bdev.o 00:02:05.894 LIB libspdk_event_bdev.a 00:02:05.894 SO libspdk_event_bdev.so.6.0 00:02:05.894 SYMLINK libspdk_event_bdev.so 00:02:06.462 CC module/event/subsystems/ublk/ublk.o 00:02:06.462 CC module/event/subsystems/nbd/nbd.o 00:02:06.462 CC module/event/subsystems/scsi/scsi.o 00:02:06.462 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:06.462 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:06.462 LIB libspdk_event_ublk.a 
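The module/event/subsystems objects being compiled and linked here are the pluggable initialization pieces behind SPDK's application targets; the nvmf target produced by this build is what the later functional test phases of this job exercise. A hedged sketch of starting it by hand (the binary path is where SPDK's make places it; the core mask and transport arguments are illustrative):

  # Launch the freshly built NVMe-oF target on four cores,
  # then create a TCP transport over JSON-RPC.
  build/bin/nvmf_tgt -m 0xF &
  scripts/rpc.py nvmf_create_transport -t TCP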
00:02:06.462 LIB libspdk_event_nbd.a 00:02:06.462 LIB libspdk_event_scsi.a 00:02:06.462 SO libspdk_event_ublk.so.3.0 00:02:06.462 SO libspdk_event_nbd.so.6.0 00:02:06.462 SO libspdk_event_scsi.so.6.0 00:02:06.462 LIB libspdk_event_nvmf.a 00:02:06.462 SYMLINK libspdk_event_ublk.so 00:02:06.462 SYMLINK libspdk_event_nbd.so 00:02:06.462 SO libspdk_event_nvmf.so.6.0 00:02:06.462 SYMLINK libspdk_event_scsi.so 00:02:06.462 SYMLINK libspdk_event_nvmf.so 00:02:06.721 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.721 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.980 LIB libspdk_event_vhost_scsi.a 00:02:06.980 LIB libspdk_event_iscsi.a 00:02:06.980 SO libspdk_event_vhost_scsi.so.3.0 00:02:06.980 SO libspdk_event_iscsi.so.6.0 00:02:06.980 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.980 SYMLINK libspdk_event_iscsi.so 00:02:07.240 SO libspdk.so.6.0 00:02:07.240 SYMLINK libspdk.so 00:02:07.498 CC app/trace_record/trace_record.o 00:02:07.498 TEST_HEADER include/spdk/accel.h 00:02:07.498 CC app/spdk_top/spdk_top.o 00:02:07.498 TEST_HEADER include/spdk/accel_module.h 00:02:07.498 TEST_HEADER include/spdk/assert.h 00:02:07.498 TEST_HEADER include/spdk/barrier.h 00:02:07.498 TEST_HEADER include/spdk/base64.h 00:02:07.498 TEST_HEADER include/spdk/bdev.h 00:02:07.498 CXX app/trace/trace.o 00:02:07.498 TEST_HEADER include/spdk/bdev_module.h 00:02:07.498 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.498 TEST_HEADER include/spdk/bit_array.h 00:02:07.498 CC app/spdk_lspci/spdk_lspci.o 00:02:07.498 CC app/spdk_nvme_identify/identify.o 00:02:07.498 TEST_HEADER include/spdk/bit_pool.h 00:02:07.498 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.498 CC app/spdk_nvme_perf/perf.o 00:02:07.498 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.498 TEST_HEADER include/spdk/blobfs.h 00:02:07.498 TEST_HEADER include/spdk/blob.h 00:02:07.756 CC test/rpc_client/rpc_client_test.o 00:02:07.756 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.756 TEST_HEADER include/spdk/conf.h 00:02:07.756 TEST_HEADER include/spdk/config.h 00:02:07.756 TEST_HEADER include/spdk/cpuset.h 00:02:07.756 TEST_HEADER include/spdk/crc16.h 00:02:07.756 TEST_HEADER include/spdk/crc32.h 00:02:07.756 TEST_HEADER include/spdk/crc64.h 00:02:07.756 TEST_HEADER include/spdk/dma.h 00:02:07.756 TEST_HEADER include/spdk/dif.h 00:02:07.756 TEST_HEADER include/spdk/endian.h 00:02:07.757 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.757 TEST_HEADER include/spdk/env.h 00:02:07.757 TEST_HEADER include/spdk/fd_group.h 00:02:07.757 TEST_HEADER include/spdk/event.h 00:02:07.757 TEST_HEADER include/spdk/fd.h 00:02:07.757 TEST_HEADER include/spdk/file.h 00:02:07.757 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.757 TEST_HEADER include/spdk/ftl.h 00:02:07.757 TEST_HEADER include/spdk/hexlify.h 00:02:07.757 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.757 TEST_HEADER include/spdk/histogram_data.h 00:02:07.757 TEST_HEADER include/spdk/idxd.h 00:02:07.757 TEST_HEADER include/spdk/init.h 00:02:07.757 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.757 TEST_HEADER include/spdk/ioat.h 00:02:07.757 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.757 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.757 TEST_HEADER include/spdk/json.h 00:02:07.757 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.757 TEST_HEADER include/spdk/keyring.h 00:02:07.757 TEST_HEADER include/spdk/keyring_module.h 00:02:07.757 TEST_HEADER include/spdk/likely.h 00:02:07.757 TEST_HEADER include/spdk/lvol.h 00:02:07.757 TEST_HEADER include/spdk/memory.h 00:02:07.757 TEST_HEADER include/spdk/log.h 
00:02:07.757 TEST_HEADER include/spdk/nbd.h 00:02:07.757 TEST_HEADER include/spdk/mmio.h 00:02:07.757 TEST_HEADER include/spdk/notify.h 00:02:07.757 TEST_HEADER include/spdk/nvme.h 00:02:07.757 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.757 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.757 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.757 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.757 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.757 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.757 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.757 TEST_HEADER include/spdk/nvmf.h 00:02:07.757 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.757 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.757 TEST_HEADER include/spdk/opal_spec.h 00:02:07.757 TEST_HEADER include/spdk/pci_ids.h 00:02:07.757 CC app/spdk_dd/spdk_dd.o 00:02:07.757 TEST_HEADER include/spdk/opal.h 00:02:07.757 TEST_HEADER include/spdk/pipe.h 00:02:07.757 TEST_HEADER include/spdk/queue.h 00:02:07.757 TEST_HEADER include/spdk/reduce.h 00:02:07.757 TEST_HEADER include/spdk/rpc.h 00:02:07.757 TEST_HEADER include/spdk/scheduler.h 00:02:07.757 TEST_HEADER include/spdk/scsi.h 00:02:07.757 CC app/nvmf_tgt/nvmf_main.o 00:02:07.757 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.757 TEST_HEADER include/spdk/sock.h 00:02:07.757 TEST_HEADER include/spdk/stdinc.h 00:02:07.757 TEST_HEADER include/spdk/string.h 00:02:07.757 TEST_HEADER include/spdk/trace.h 00:02:07.757 TEST_HEADER include/spdk/thread.h 00:02:07.757 TEST_HEADER include/spdk/tree.h 00:02:07.757 TEST_HEADER include/spdk/trace_parser.h 00:02:07.757 TEST_HEADER include/spdk/ublk.h 00:02:07.757 TEST_HEADER include/spdk/util.h 00:02:07.757 TEST_HEADER include/spdk/uuid.h 00:02:07.757 TEST_HEADER include/spdk/version.h 00:02:07.757 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.757 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.757 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.757 TEST_HEADER include/spdk/vmd.h 00:02:07.757 TEST_HEADER include/spdk/zipf.h 00:02:07.757 TEST_HEADER include/spdk/vhost.h 00:02:07.757 TEST_HEADER include/spdk/xor.h 00:02:07.757 CXX test/cpp_headers/accel_module.o 00:02:07.757 CXX test/cpp_headers/accel.o 00:02:07.757 CXX test/cpp_headers/assert.o 00:02:07.757 CXX test/cpp_headers/barrier.o 00:02:07.757 CXX test/cpp_headers/base64.o 00:02:07.757 CXX test/cpp_headers/bdev.o 00:02:07.757 CXX test/cpp_headers/bdev_zone.o 00:02:07.757 CXX test/cpp_headers/bdev_module.o 00:02:07.757 CXX test/cpp_headers/blob_bdev.o 00:02:07.757 CXX test/cpp_headers/bit_array.o 00:02:07.757 CXX test/cpp_headers/bit_pool.o 00:02:07.757 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.757 CXX test/cpp_headers/blobfs.o 00:02:07.757 CXX test/cpp_headers/blob.o 00:02:07.757 CXX test/cpp_headers/cpuset.o 00:02:07.757 CC app/spdk_tgt/spdk_tgt.o 00:02:07.757 CXX test/cpp_headers/config.o 00:02:07.757 CXX test/cpp_headers/conf.o 00:02:07.757 CXX test/cpp_headers/crc16.o 00:02:07.757 CXX test/cpp_headers/crc32.o 00:02:07.757 CXX test/cpp_headers/dif.o 00:02:07.757 CXX test/cpp_headers/crc64.o 00:02:07.757 CXX test/cpp_headers/env_dpdk.o 00:02:07.757 CXX test/cpp_headers/dma.o 00:02:07.757 CXX test/cpp_headers/endian.o 00:02:07.757 CXX test/cpp_headers/env.o 00:02:07.757 CXX test/cpp_headers/fd_group.o 00:02:07.757 CXX test/cpp_headers/event.o 00:02:07.757 CXX test/cpp_headers/file.o 00:02:07.757 CXX test/cpp_headers/ftl.o 00:02:07.757 CXX test/cpp_headers/fd.o 00:02:07.757 CXX test/cpp_headers/gpt_spec.o 00:02:07.757 CXX test/cpp_headers/hexlify.o 00:02:07.757 CXX test/cpp_headers/idxd.o 
00:02:07.757 CXX test/cpp_headers/histogram_data.o 00:02:07.757 CXX test/cpp_headers/idxd_spec.o 00:02:07.757 CXX test/cpp_headers/ioat.o 00:02:07.757 CXX test/cpp_headers/init.o 00:02:07.757 CXX test/cpp_headers/ioat_spec.o 00:02:07.757 CXX test/cpp_headers/json.o 00:02:07.757 CXX test/cpp_headers/iscsi_spec.o 00:02:07.757 CXX test/cpp_headers/jsonrpc.o 00:02:07.757 CXX test/cpp_headers/keyring.o 00:02:07.757 CXX test/cpp_headers/keyring_module.o 00:02:07.757 CXX test/cpp_headers/likely.o 00:02:07.757 CXX test/cpp_headers/lvol.o 00:02:07.757 CXX test/cpp_headers/log.o 00:02:07.757 CXX test/cpp_headers/memory.o 00:02:07.757 CXX test/cpp_headers/mmio.o 00:02:07.757 CXX test/cpp_headers/nbd.o 00:02:07.757 CXX test/cpp_headers/notify.o 00:02:07.757 CXX test/cpp_headers/nvme.o 00:02:07.757 CXX test/cpp_headers/nvme_intel.o 00:02:07.757 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.757 CXX test/cpp_headers/nvme_spec.o 00:02:07.757 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.757 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.757 CXX test/cpp_headers/nvme_zns.o 00:02:07.757 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.757 CXX test/cpp_headers/nvmf.o 00:02:07.757 CXX test/cpp_headers/nvmf_transport.o 00:02:07.757 CXX test/cpp_headers/nvmf_spec.o 00:02:07.757 CXX test/cpp_headers/opal.o 00:02:07.757 CXX test/cpp_headers/pci_ids.o 00:02:07.757 CXX test/cpp_headers/opal_spec.o 00:02:07.757 CXX test/cpp_headers/pipe.o 00:02:07.757 CXX test/cpp_headers/queue.o 00:02:07.757 CC examples/util/zipf/zipf.o 00:02:07.757 CXX test/cpp_headers/reduce.o 00:02:07.757 CC examples/ioat/perf/perf.o 00:02:07.757 CC test/thread/poller_perf/poller_perf.o 00:02:07.757 CC test/env/memory/memory_ut.o 00:02:07.757 CC examples/ioat/verify/verify.o 00:02:07.757 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.757 CXX test/cpp_headers/rpc.o 00:02:07.757 CC test/env/pci/pci_ut.o 00:02:07.757 CC test/env/vtophys/vtophys.o 00:02:07.757 CC app/fio/nvme/fio_plugin.o 00:02:07.757 CC test/app/jsoncat/jsoncat.o 00:02:08.033 CC test/app/stub/stub.o 00:02:08.033 CC test/app/histogram_perf/histogram_perf.o 00:02:08.033 CXX test/cpp_headers/scheduler.o 00:02:08.033 CC test/dma/test_dma/test_dma.o 00:02:08.033 CC app/fio/bdev/fio_plugin.o 00:02:08.033 CC test/app/bdev_svc/bdev_svc.o 00:02:08.033 LINK spdk_lspci 00:02:08.033 LINK interrupt_tgt 00:02:08.033 LINK spdk_trace_record 00:02:08.296 LINK nvmf_tgt 00:02:08.296 LINK spdk_nvme_discover 00:02:08.296 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.296 LINK rpc_client_test 00:02:08.296 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.296 LINK poller_perf 00:02:08.296 LINK spdk_tgt 00:02:08.296 CXX test/cpp_headers/scsi.o 00:02:08.296 LINK iscsi_tgt 00:02:08.296 CXX test/cpp_headers/scsi_spec.o 00:02:08.296 LINK vtophys 00:02:08.296 CXX test/cpp_headers/sock.o 00:02:08.296 CXX test/cpp_headers/stdinc.o 00:02:08.296 CXX test/cpp_headers/string.o 00:02:08.296 CXX test/cpp_headers/thread.o 00:02:08.296 LINK env_dpdk_post_init 00:02:08.296 CXX test/cpp_headers/trace.o 00:02:08.296 CXX test/cpp_headers/trace_parser.o 00:02:08.296 CXX test/cpp_headers/tree.o 00:02:08.296 CXX test/cpp_headers/ublk.o 00:02:08.296 CXX test/cpp_headers/util.o 00:02:08.296 CXX test/cpp_headers/uuid.o 00:02:08.296 CXX test/cpp_headers/version.o 00:02:08.296 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.296 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.296 LINK zipf 00:02:08.296 CXX test/cpp_headers/vhost.o 00:02:08.296 CXX test/cpp_headers/vmd.o 00:02:08.296 CXX test/cpp_headers/xor.o 00:02:08.296 CXX 
test/cpp_headers/zipf.o 00:02:08.296 LINK jsoncat 00:02:08.296 LINK histogram_perf 00:02:08.580 LINK stub 00:02:08.580 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.580 LINK spdk_dd 00:02:08.580 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.580 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.580 LINK ioat_perf 00:02:08.580 LINK verify 00:02:08.580 LINK bdev_svc 00:02:08.580 LINK spdk_trace 00:02:08.580 LINK pci_ut 00:02:08.580 LINK test_dma 00:02:08.580 LINK spdk_nvme 00:02:08.839 CC test/event/event_perf/event_perf.o 00:02:08.839 CC test/event/reactor/reactor.o 00:02:08.839 CC test/event/reactor_perf/reactor_perf.o 00:02:08.839 LINK spdk_nvme_perf 00:02:08.839 CC test/event/app_repeat/app_repeat.o 00:02:08.839 LINK nvme_fuzz 00:02:08.839 LINK spdk_bdev 00:02:08.839 CC test/event/scheduler/scheduler.o 00:02:08.839 CC examples/vmd/led/led.o 00:02:08.839 LINK spdk_nvme_identify 00:02:08.839 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.839 CC examples/sock/hello_world/hello_sock.o 00:02:08.839 CC examples/thread/thread/thread_ex.o 00:02:08.839 CC examples/idxd/perf/perf.o 00:02:08.839 LINK vhost_fuzz 00:02:08.839 LINK reactor 00:02:08.839 LINK spdk_top 00:02:08.839 LINK event_perf 00:02:08.839 LINK reactor_perf 00:02:08.839 LINK mem_callbacks 00:02:08.839 CC app/vhost/vhost.o 00:02:08.839 LINK app_repeat 00:02:09.097 LINK lsvmd 00:02:09.097 LINK led 00:02:09.097 LINK scheduler 00:02:09.097 LINK hello_sock 00:02:09.097 LINK thread 00:02:09.097 CC test/nvme/fused_ordering/fused_ordering.o 00:02:09.097 CC test/nvme/aer/aer.o 00:02:09.097 CC test/nvme/reset/reset.o 00:02:09.097 CC test/nvme/e2edp/nvme_dp.o 00:02:09.097 CC test/nvme/compliance/nvme_compliance.o 00:02:09.097 CC test/nvme/connect_stress/connect_stress.o 00:02:09.097 CC test/nvme/sgl/sgl.o 00:02:09.097 CC test/nvme/cuse/cuse.o 00:02:09.097 CC test/nvme/overhead/overhead.o 00:02:09.097 LINK idxd_perf 00:02:09.097 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:09.097 CC test/nvme/err_injection/err_injection.o 00:02:09.097 CC test/nvme/simple_copy/simple_copy.o 00:02:09.097 CC test/nvme/fdp/fdp.o 00:02:09.097 CC test/nvme/reserve/reserve.o 00:02:09.097 CC test/nvme/startup/startup.o 00:02:09.097 CC test/nvme/boot_partition/boot_partition.o 00:02:09.097 LINK memory_ut 00:02:09.097 CC test/accel/dif/dif.o 00:02:09.097 LINK vhost 00:02:09.097 CC test/blobfs/mkfs/mkfs.o 00:02:09.355 CC test/lvol/esnap/esnap.o 00:02:09.355 LINK boot_partition 00:02:09.355 LINK connect_stress 00:02:09.355 LINK startup 00:02:09.355 LINK fused_ordering 00:02:09.355 LINK doorbell_aers 00:02:09.355 LINK err_injection 00:02:09.355 LINK reserve 00:02:09.355 LINK reset 00:02:09.355 LINK simple_copy 00:02:09.355 LINK mkfs 00:02:09.355 LINK sgl 00:02:09.355 LINK nvme_dp 00:02:09.355 LINK overhead 00:02:09.355 LINK aer 00:02:09.355 LINK nvme_compliance 00:02:09.355 LINK fdp 00:02:09.613 CC examples/nvme/hotplug/hotplug.o 00:02:09.613 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:09.613 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.613 CC examples/nvme/arbitration/arbitration.o 00:02:09.613 CC examples/nvme/reconnect/reconnect.o 00:02:09.613 CC examples/nvme/hello_world/hello_world.o 00:02:09.613 CC examples/nvme/abort/abort.o 00:02:09.613 LINK dif 00:02:09.613 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.613 CC examples/accel/perf/accel_perf.o 00:02:09.613 CC examples/blob/cli/blobcli.o 00:02:09.613 CC examples/blob/hello_world/hello_blob.o 00:02:09.613 LINK cmb_copy 00:02:09.613 LINK pmr_persistence 00:02:09.613 LINK hello_world 
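The TEST_HEADER/CXX test/cpp_headers run above compiles every public spdk header as its own translation unit, which catches headers that fail to include what they use. A rough shell equivalent of that check, assuming a C++ compiler on PATH (the real test generates a small source file per header through its own makefile):

  # Compile each public header standalone; report any that fail.
  for h in include/spdk/*.h; do
    echo "#include <spdk/$(basename "$h")>" |
      c++ -Iinclude -x c++ -c - -o /dev/null || echo "FAIL: $h"
  done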
00:02:09.613 LINK hotplug 00:02:09.871 LINK arbitration 00:02:09.871 LINK iscsi_fuzz 00:02:09.871 LINK abort 00:02:09.871 LINK reconnect 00:02:09.871 LINK hello_blob 00:02:09.871 LINK nvme_manage 00:02:09.871 LINK accel_perf 00:02:09.871 LINK blobcli 00:02:10.130 CC test/bdev/bdevio/bdevio.o 00:02:10.130 LINK cuse 00:02:10.388 LINK bdevio 00:02:10.388 CC examples/bdev/hello_world/hello_bdev.o 00:02:10.388 CC examples/bdev/bdevperf/bdevperf.o 00:02:10.647 LINK hello_bdev 00:02:10.906 LINK bdevperf 00:02:11.473 CC examples/nvmf/nvmf/nvmf.o 00:02:11.731 LINK nvmf 00:02:12.668 LINK esnap 00:02:12.927 00:02:12.927 real 0m45.005s 00:02:12.927 user 6m34.027s 00:02:12.927 sys 3m25.529s 00:02:12.927 12:36:43 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.927 12:36:43 make -- common/autotest_common.sh@10 -- $ set +x 00:02:12.927 ************************************ 00:02:12.927 END TEST make 00:02:12.927 ************************************ 00:02:12.927 12:36:43 -- common/autotest_common.sh@1142 -- $ return 0 00:02:12.927 12:36:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:12.927 12:36:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:12.927 12:36:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:12.927 12:36:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.927 12:36:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:12.927 12:36:43 -- pm/common@44 -- $ pid=1422742 00:02:12.927 12:36:43 -- pm/common@50 -- $ kill -TERM 1422742 00:02:12.927 12:36:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.927 12:36:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:12.927 12:36:43 -- pm/common@44 -- $ pid=1422743 00:02:12.927 12:36:43 -- pm/common@50 -- $ kill -TERM 1422743 00:02:12.927 12:36:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.927 12:36:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:12.927 12:36:43 -- pm/common@44 -- $ pid=1422745 00:02:12.927 12:36:43 -- pm/common@50 -- $ kill -TERM 1422745 00:02:12.927 12:36:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.927 12:36:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:12.927 12:36:43 -- pm/common@44 -- $ pid=1422768 00:02:12.927 12:36:43 -- pm/common@50 -- $ sudo -E kill -TERM 1422768 00:02:13.186 12:36:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:13.186 12:36:43 -- nvmf/common.sh@7 -- # uname -s 00:02:13.186 12:36:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:13.186 12:36:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:13.186 12:36:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:13.186 12:36:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:13.186 12:36:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:13.186 12:36:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:13.186 12:36:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:13.186 12:36:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:13.186 12:36:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:13.186 12:36:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:13.186 12:36:43 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:13.186 12:36:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:13.186 12:36:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:13.186 12:36:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:13.186 12:36:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:13.186 12:36:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:13.186 12:36:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:13.186 12:36:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:13.186 12:36:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:13.186 12:36:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:13.186 12:36:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.186 12:36:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.186 12:36:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.186 12:36:43 -- paths/export.sh@5 -- # export PATH 00:02:13.186 12:36:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:13.186 12:36:43 -- nvmf/common.sh@47 -- # : 0 00:02:13.186 12:36:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:13.186 12:36:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:13.186 12:36:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:13.186 12:36:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:13.186 12:36:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:13.186 12:36:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:13.186 12:36:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:13.186 12:36:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:13.186 12:36:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:13.186 12:36:43 -- spdk/autotest.sh@32 -- # uname -s 00:02:13.186 12:36:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:13.186 12:36:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:13.186 12:36:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:13.186 12:36:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:13.186 12:36:43 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:13.186 12:36:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:13.186 12:36:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:13.186 12:36:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:13.186 12:36:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1482492 00:02:13.186 12:36:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:13.186 12:36:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:13.186 12:36:43 -- pm/common@17 -- # local monitor 00:02:13.186 12:36:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.186 12:36:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.186 12:36:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.186 12:36:43 -- pm/common@21 -- # date +%s 00:02:13.186 12:36:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:13.186 12:36:43 -- pm/common@21 -- # date +%s 00:02:13.186 12:36:43 -- pm/common@25 -- # sleep 1 00:02:13.186 12:36:43 -- pm/common@21 -- # date +%s 00:02:13.186 12:36:43 -- pm/common@21 -- # date +%s 00:02:13.187 12:36:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039803 00:02:13.187 12:36:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039803 00:02:13.187 12:36:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039803 00:02:13.187 12:36:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721039803 00:02:13.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039803_collect-vmstat.pm.log 00:02:13.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039803_collect-cpu-load.pm.log 00:02:13.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039803_collect-cpu-temp.pm.log 00:02:13.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721039803_collect-bmc-pm.bmc.pm.log 00:02:14.124 12:36:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:14.124 12:36:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:14.124 12:36:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:14.124 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:02:14.124 12:36:44 -- spdk/autotest.sh@59 -- # create_test_list 00:02:14.124 12:36:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:14.124 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:02:14.124 12:36:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:14.124 12:36:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.124 12:36:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:02:14.124 12:36:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:14.124 12:36:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:14.124 12:36:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:14.124 12:36:45 -- common/autotest_common.sh@1455 -- # uname
00:02:14.124 12:36:45 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:14.124 12:36:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:14.124 12:36:45 -- common/autotest_common.sh@1475 -- # uname
00:02:14.124 12:36:45 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:14.124 12:36:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:14.124 12:36:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:14.124 12:36:45 -- spdk/autotest.sh@72 -- # hash lcov
00:02:14.124 12:36:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:14.124 12:36:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:14.124 --rc lcov_branch_coverage=1
00:02:14.124 --rc lcov_function_coverage=1
00:02:14.124 --rc genhtml_branch_coverage=1
00:02:14.124 --rc genhtml_function_coverage=1
00:02:14.124 --rc genhtml_legend=1
00:02:14.124 --rc geninfo_all_blocks=1
00:02:14.124 '
00:02:14.124 12:36:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:14.124 --rc lcov_branch_coverage=1
00:02:14.124 --rc lcov_function_coverage=1
00:02:14.124 --rc genhtml_branch_coverage=1
00:02:14.124 --rc genhtml_function_coverage=1
00:02:14.124 --rc genhtml_legend=1
00:02:14.124 --rc geninfo_all_blocks=1
00:02:14.124 '
00:02:14.124 12:36:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:14.124 --rc lcov_branch_coverage=1
00:02:14.124 --rc lcov_function_coverage=1
00:02:14.124 --rc genhtml_branch_coverage=1
00:02:14.124 --rc genhtml_function_coverage=1
00:02:14.124 --rc genhtml_legend=1
00:02:14.124 --rc geninfo_all_blocks=1
00:02:14.124 --no-external'
00:02:14.124 12:36:45 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:14.124 --rc lcov_branch_coverage=1
00:02:14.124 --rc lcov_function_coverage=1
00:02:14.124 --rc genhtml_branch_coverage=1
00:02:14.124 --rc genhtml_function_coverage=1
00:02:14.124 --rc genhtml_legend=1
00:02:14.124 --rc geninfo_all_blocks=1
00:02:14.124 --no-external'
00:02:14.124 12:36:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:14.383 lcov: LCOV version 1.14
00:02:14.383 12:36:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:26.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:26.589 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:36.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:36.569 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:36.570
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:36.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:36.570 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no 
functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:36.571 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:36.571 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:36.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:36.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:02:39.102 12:37:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:39.102 12:37:09 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:39.102 12:37:09 -- common/autotest_common.sh@10 -- # set +x
00:02:39.102 12:37:09 -- spdk/autotest.sh@91 -- # rm -f
00:02:39.102 12:37:09 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:41.636 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:41.636 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:41.636 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:41.895 12:37:12 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:41.895 12:37:12 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:41.895 12:37:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:41.895 12:37:12 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:41.895 12:37:12 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:41.895 12:37:12 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:41.895 12:37:12 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:41.895 12:37:12 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:41.895 12:37:12 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:41.895 12:37:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:41.895 12:37:12 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:41.895 12:37:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:41.895 12:37:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:41.895 12:37:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:41.895 12:37:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:41.895 No valid GPT data, bailing
00:02:41.895 12:37:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:41.895 12:37:12 -- scripts/common.sh@391 -- # pt=
00:02:41.895 12:37:12 -- scripts/common.sh@392 -- # return 1
00:02:41.895 12:37:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:41.895 1+0 records in
00:02:41.895 1+0 records out
00:02:41.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450204 s, 233 MB/s
00:02:41.895 12:37:12 -- spdk/autotest.sh@118 -- # sync
00:02:41.895 12:37:12 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:41.895 12:37:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:41.895 12:37:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:47.181 12:37:17 -- spdk/autotest.sh@124 -- # uname -s
00:02:47.181 12:37:17 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:47.181 12:37:17 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:47.181 12:37:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:47.181 12:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:47.181 12:37:17 -- common/autotest_common.sh@10 -- # set +x
00:02:47.181 ************************************
00:02:47.181 START TEST setup.sh
00:02:47.181 ************************************
00:02:47.181 12:37:17 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:47.181 * Looking for test storage...
00:02:47.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:47.181 12:37:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:47.181 12:37:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:47.181 12:37:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:47.181 12:37:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:47.181 12:37:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:47.181 12:37:18 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:47.181 ************************************
00:02:47.181 START TEST acl
00:02:47.181 ************************************
00:02:47.181 12:37:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:47.440 * Looking for test storage...
00:02:47.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:47.440 12:37:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.440 12:37:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:47.440 12:37:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:47.440 12:37:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:47.440 12:37:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:47.440 12:37:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:47.440 12:37:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:47.440 12:37:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.440 12:37:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.730 12:37:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:50.730 12:37:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:50.730 12:37:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.730 12:37:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:50.730 12:37:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.730 12:37:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:53.268 Hugepages 00:02:53.268 node hugesize free / total 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 00:02:53.268 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.268 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:53.528 12:37:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:53.528 12:37:24 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.528 12:37:24 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.528 12:37:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.528 ************************************ 00:02:53.528 START TEST denied 00:02:53.528 ************************************ 00:02:53.528 12:37:24 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:53.528 12:37:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:53.528 12:37:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:53.528 12:37:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:53.528 12:37:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.528 12:37:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:56.864 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:56.864 12:37:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:56.864 12:37:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:01.057
00:03:01.057 real 0m7.140s
00:03:01.057 user 0m2.369s
00:03:01.057 sys 0m4.022s
00:03:01.057 12:37:31 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:01.057 12:37:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:01.057 ************************************
00:03:01.057 END TEST denied
00:03:01.057 ************************************
00:03:01.057 12:37:31 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:01.057 12:37:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:01.057 12:37:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:01.057 12:37:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:01.057 12:37:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:01.057 ************************************
00:03:01.057 START TEST allowed
00:03:01.057 ************************************
00:03:01.057 12:37:31 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:03:01.057 12:37:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:03:01.057 12:37:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:01.057 12:37:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:03:01.057 12:37:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.057 12:37:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:05.271 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:05.271 12:37:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:05.271 12:37:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:05.271 12:37:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:05.271 12:37:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:05.271 12:37:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:07.808
00:03:07.808 real 0m7.035s
00:03:07.808 user 0m2.216s
00:03:07.808 sys 0m3.997s
00:03:07.808 12:37:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:07.808 12:37:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:07.808 ************************************
00:03:07.808 END TEST allowed
00:03:07.808 ************************************
00:03:07.808 12:37:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:07.808
00:03:07.808 real 0m20.491s
00:03:07.808 user 0m6.987s
00:03:07.808 sys 0m12.143s
00:03:07.808 12:37:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:07.808 12:37:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:07.808 ************************************
00:03:07.808 END TEST acl
00:03:07.808 ************************************
00:03:07.808 12:37:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:07.808 12:37:38 setup.sh --
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:07.808 12:37:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.808 12:37:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.808 12:37:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.808 ************************************ 00:03:07.808 START TEST hugepages 00:03:07.808 ************************************ 00:03:07.808 12:37:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:07.808 * Looking for test storage... 00:03:08.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.069 12:37:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173254920 kB' 'MemAvailable: 176126424 kB' 'Buffers: 3896 kB' 'Cached: 10203844 kB' 'SwapCached: 0 kB' 'Active: 7216472 kB' 'Inactive: 3507356 kB' 'Active(anon): 6824464 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519352 kB' 'Mapped: 181332 kB' 'Shmem: 6308376 kB' 'KReclaimable: 233184 kB' 'Slab: 800448 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 567264 kB' 'KernelStack: 20752 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8336084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:08.069 [xtrace condensed: setup/common.sh@31-32 reads /proc/meminfo one 'key: value' pair at a time and continues past every key that is not Hugepagesize]
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
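The trace above resolves the system default hugepage size by scanning /proc/meminfo until the Hugepagesize key matches, then echoing its value (2048 kB here). A minimal bash sketch of that lookup pattern, assuming a helper name get_meminfo_value that is not part of the SPDK scripts:

    get_meminfo_value() {
        local get=$1 var val _
        # Scan "key: value [kB]" pairs; skip every key except the requested one.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 on this machine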
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.070 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:08.071 [xtrace condensed: setup/hugepages.sh@39-41 loops over both NUMA nodes and every hugepage size under /sys/devices/system/node/node*/hugepages/, echoing 0 into each nr_hugepages]
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:08.071 12:37:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:08.071 12:37:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:08.071 12:37:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:08.071 12:37:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:08.071 ************************************
00:03:08.071 START TEST default_setup
00:03:08.071 ************************************
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.071 12:37:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
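get_test_nr_hugepages converts the requested 2097152 kB into a page count against the 2048 kB default page size (2097152 / 2048 = 1024, the nr_hugepages=1024 seen above), after clear_hp has zeroed every per-node pool. A sketch of the sysfs writes this amounts to, assuming root and the two NUMA nodes of this host; the paths are the kernel's, but the loop itself is illustrative rather than the SPDK code:

    # Zero every per-node, per-size hugepage pool (what clear_hp's "echo 0" does).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # Then request the test's pool: 2097152 kB / 2048 kB per page = 1024 pages.
    echo 1024 > /proc/sys/vm/nr_hugepages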
00:03:11.363 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:11.363 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:11.937 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
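scripts/setup.sh rebinds the ioatdma channels and the NVMe controller to vfio-pci so user-space drivers can claim them. One common sysfs sequence for a single device, as an illustration of the mechanism rather than of what setup.sh runs internally:

    dev=0000:00:04.7                                          # example BDF from the log
    modprobe vfio-pci                                         # make sure the driver is loaded
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"   # detach ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe                  # rebind; vfio-pci now wins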
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.937 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175384044 kB' 'MemAvailable: 178255548 kB' 'Buffers: 3896 kB' 'Cached: 10203956 kB' 'SwapCached: 0 kB' 'Active: 7232740 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840732 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535464 kB' 'Mapped: 180936 kB' 'Shmem: 6308488 kB' 'KReclaimable: 233184 kB' 'Slab: 798956 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565772 kB' 'KernelStack: 20752 kB' 'PageTables: 9600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8353828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
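The get_meminfo prologue above also supports a per-node query: with a node argument it would read /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that the mem=() expansion strips. A sketch of that source selection, assuming extglob as the trace's pattern implies:

    shopt -s extglob
    node=${1-}                         # empty => whole-system /proc/meminfo
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <N> " prefix, if any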
00:03:11.937 [xtrace condensed: setup/common.sh@31-32 reads each /proc/meminfo key and continues past every key that is not AnonHugePages]
00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
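anon=0 here is the AnonHugePages figure: verify_nr_hugepages samples transparent-hugepage usage separately so THP-backed anonymous memory is not confused with the reserved hugetlb pool. The @96 test above inspects the kernel's THP policy string; roughly:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    # e.g. "always [madvise] never" - the bracketed word is the active policy.
    if [[ $thp != *"[never]"* ]]; then
        awk '/^AnonHugePages:/ {print "THP in use:", $2, $3}' /proc/meminfo
    fi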
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.938 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175383936 kB' 'MemAvailable: 178255440 kB' 'Buffers: 3896 kB' 'Cached: 10203960 kB' 'SwapCached: 0 kB' 'Active: 7231868 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839860 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534808 kB' 'Mapped: 180924 kB' 'Shmem: 6308492 kB' 'KReclaimable: 233184 kB' 'Slab: 799056 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565872 kB' 'KernelStack: 20752 kB' 'PageTables: 9424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8353848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.939 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
[... xtrace elided: the remaining /proc/meminfo keys (SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) each fail the HugePages_Surp comparison at setup/common.sh@32 and `continue` ...]
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
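What the wall of [[ ... ]] / continue records above amounts to: setup/common.sh's get_meminfo slurps a meminfo file into an array with mapfile, strips any per-node "Node <N>" prefix, then walks the "Key: value" pairs until the requested key matches, so every non-matching key costs one comparison plus one `continue` in the xtrace. A minimal standalone sketch of that technique, assuming plain bash 4+ (get_meminfo_sketch is a hypothetical stand-in, not the verbatim SPDK helper):

shopt -s extglob

# get_meminfo_sketch KEY [NODE] -- print the value recorded for KEY, reading
# /proc/meminfo or, when NODE is given and present, that node's sysfs copy.
get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }          # per-node lines carry "Node <N> "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Surp            # system-wide: prints 0 on this box
get_meminfo_sketch HugePages_Rsvd 0          # NUMA node 0 only

The quoting in [[ $var == "$get" ]] is what the escaped \H\u\g\e... pattern in the trace corresponds to: bash xtrace prints the right-hand side with every character escaped because it is matched literally rather than as a glob.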
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.940 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175382520 kB' 'MemAvailable: 178254024 kB' 'Buffers: 3896 kB' 'Cached: 10203980 kB' 'SwapCached: 0 kB' 'Active: 7232012 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840004 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534940 kB' 'Mapped: 180924 kB' 'Shmem: 6308512 kB' 'KReclaimable: 233184 kB' 'Slab: 799056 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565872 kB' 'KernelStack: 20768 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8353868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
[... xtrace elided: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd comparison at setup/common.sh@32 and `continue`s ...]
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:11.942 nr_hugepages=1024
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.942 resv_hugepages=0
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.942 surplus_hugepages=0
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.942 anon_hugepages=0
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
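At this point hugepages.sh has all three counters it needs and asserts that the kernel's hugepage books balance. Restated standalone with this log's values (the bare 1024 on the left of each test was read earlier in the run; naming it `free` below is an assumption, not something this excerpt shows):

# Values as printed by the log above; 'free' is an assumed name for the
# count the script compares against (read before this excerpt begins).
nr_hugepages=1024
surp=0       # HugePages_Surp
resv=0       # HugePages_Rsvd
free=1024
(( free == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( free == nr_hugepages ))               || echo "fewer pages than requested" >&2

Both tests pass here, so the trace immediately re-queries HugePages_Total to double-check the same identity against the total count.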
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.942 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175383948 kB' 'MemAvailable: 178255452 kB' 'Buffers: 3896 kB' 'Cached: 10204000 kB' 'SwapCached: 0 kB' 'Active: 7231884 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839876 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534852 kB' 'Mapped: 180924 kB' 'Shmem: 6308532 kB' 'KReclaimable: 233184 kB' 'Slab: 799056 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565872 kB' 'KernelStack: 20560 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8353892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
[... xtrace elided: every key from MemTotal through Unaccepted fails the HugePages_Total comparison at setup/common.sh@32 and `continue`s ...]
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:11.944 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86195600 kB' 'MemUsed: 11467084 kB' 'SwapCached: 0 kB' 'Active: 4887680 kB' 'Inactive: 3338124 kB' 'Active(anon): 4730140 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8036924 kB' 'Mapped: 72916 kB' 'AnonPages: 192220 kB' 'Shmem: 4541260 kB' 'KernelStack: 11128 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 383620 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 258316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: node0 keys MemTotal through KReclaimable fail the HugePages_Surp comparison at setup/common.sh@32 and `continue`; the excerpt ends mid-scan ...]
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:11.945 node0=1024 expecting 1024 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:11.945 00:03:11.945 real 0m4.026s 00:03:11.945 user 0m1.306s 00:03:11.945 sys 0m1.971s 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.945 12:37:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:11.945 ************************************ 00:03:11.945 END TEST default_setup 00:03:11.945 ************************************ 00:03:12.205 12:37:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:12.205 12:37:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:12.205 12:37:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.205 12:37:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.205 12:37:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.205 ************************************ 00:03:12.205 START TEST per_node_1G_alloc 00:03:12.205 ************************************ 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- 
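[editor's note] The get_test_nr_hugepages trace that continues below (setup/hugepages.sh@49 through @73) converts the requested size into a hugepage count and spreads it across the requested NUMA nodes: size=1048576 kB divided by the 2048 kB default page size reported in the meminfo dumps gives nr_hugepages=512, and each of nodes 0 and 1 is assigned the full 512 (NRHUGE=512, HUGENODE=0,1). A minimal sketch of that arithmetic, reconstructed only from the xtrace shown here; the variable names mirror the trace, but this is not the verbatim setup/hugepages.sh source:

    default_hugepages=2048                        # kB, per 'Hugepagesize: 2048 kB' in the meminfo dumps
    size=1048576                                  # kB requested, i.e. 1 GiB
    user_nodes=(0 1)                              # from HUGENODE=0,1
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages            # each listed node gets the full 512
    done

With two nodes this requests 512 pages on node0 and 512 on node1, which is why the trace later shows verify_nr_hugepages running with nr_hugepages=1024 in total.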
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.205 12:37:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.740 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.740 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.740 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.740 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.003 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.004 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175398732 kB' 'MemAvailable: 178270236 kB' 'Buffers: 3896 kB' 'Cached: 10204096 kB' 'SwapCached: 0 kB' 'Active: 7231912 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839904 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534600 kB' 'Mapped: 180944 kB' 'Shmem: 6308628 kB' 'KReclaimable: 233184 kB' 'Slab: 798552 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565368 kB' 'KernelStack: 20560 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8351872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:15.004 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.004 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [identical IFS=': ' / read -r var val _ / match / continue xtrace elided while the loop walks the remaining /proc/meminfo fields]
00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
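[editor's note] Every get_meminfo call in this trace follows the same pattern: read the whole meminfo file into an array, strip any 'Node N ' prefix, then scan key by key with IFS=': ' until the requested field matches and its value is echoed. A condensed reconstruction of that helper, inferred from the setup/common.sh xtrace alone (the in-tree function may differ in details):

    shopt -s extglob                    # needed for the +([0-9]) pattern below
    get_meminfo() {                     # usage: get_meminfo AnonHugePages [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # per-node queries read the node-local file instead, matching the
        # [[ -e /sys/devices/system/node/node$node/meminfo ]] check in the trace
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the 'Node N ' column prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                     # e.g. 0 for AnonHugePages above
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

The linear scan is why the log shows one [[ ... ]] / continue pair per meminfo field: the helper walks the file top to bottom until the key it was asked for appears, so a field near the end (like HugePages_Surp) produces a long run of continues first.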
00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175398480 kB' 'MemAvailable: 178269984 kB' 'Buffers: 3896 kB' 'Cached: 10204100 kB' 'SwapCached: 0 kB' 'Active: 7232128 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840120 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534828 kB' 'Mapped: 180936 kB' 'Shmem: 6308632 kB' 'KReclaimable: 233184 kB' 'Slab: 798560 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565376 kB' 'KernelStack: 20560 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8351892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.005 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [identical IFS=': ' / read -r var val _ / match / continue xtrace elided through HugePages_Rsvd] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175399312 kB' 'MemAvailable: 178270816 kB' 'Buffers: 3896 kB' 'Cached: 10204116 kB' 'SwapCached: 0 kB' 'Active: 7232128 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840120 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534836 kB' 'Mapped: 180936 kB' 'Shmem: 6308648 kB' 'KReclaimable: 233184 kB' 'Slab: 798560 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565376 kB' 'KernelStack: 20560 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8351916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.007 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.009 12:37:45 
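
The condensed "continue" runs above are bash xtrace from SPDK's get_meminfo helper in test/setup/common.sh: it snapshots a meminfo file into an array, strips any per-node prefix, then walks the lines with an IFS=': ' read loop until the requested field matches and echoes its value. A minimal sketch of the same lookup, assuming only a standard Linux meminfo layout; the name get_mem_field and the loop shape are simplified stand-ins, not the exact upstream code:

    #!/usr/bin/env bash
    shopt -s extglob
    # get_mem_field <FieldName> [node]: print the value column for one field,
    # from /proc/meminfo or, when a node is given, that node's sysfs meminfo.
    get_mem_field() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it with the
        # same extglob pattern the traced helper uses.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            # Each non-matching field shows up in the xtrace as a bare "continue".
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_mem_field HugePages_Surp      # system-wide; prints 0 on this box
    get_mem_field HugePages_Total 0   # NUMA node 0; prints 512 here

The traced helper feeds its loop from a printf '%s\n' of the captured array, which is why each full meminfo snapshot is echoed into the log just before a scan.
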
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.009 nr_hugepages=1024
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.009 resv_hugepages=0
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.009 surplus_hugepages=0
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.009 anon_hugepages=0
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' ... [second /proc/meminfo snapshot, identical to the one above except: Cached: 10204160 kB, Active: 7231816 kB, Active(anon): 6839808 kB, AnonPages: 534448 kB, Shmem: 6308692 kB, KernelStack: 20544 kB, PageTables: 8728 kB, Committed_AS: 8351936 kB]
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.009 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
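
The two arithmetic guards at hugepages.sh@107 and @109 encode the invariant this test checks: the HugePages_Total the kernel reports must equal the requested pool size (nr_hugepages) plus surplus pages (HugePages_Surp, overcommit pages beyond the static pool) and reserved pages (HugePages_Rsvd, pages promised to mappings but not yet faulted in). The second guard is the simpler equality that holds when surplus and reserved are both zero. The same check as a standalone snippet, with the values taken from this run:

    # Values read out of /proc/meminfo by the scans above.
    nr_hugepages=1024   # requested static pool (vm.nr_hugepages)
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total as the kernel reports it

    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting is off' >&2
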
00:03:15.010 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks MemTotal through Unaccepted, "continue" on each, until HugePages_Total matches]
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.011 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87260112 kB' 'MemUsed: 10402572 kB' 'SwapCached: 0 kB' 'Active: 4887416 kB' 'Inactive: 3338124 kB' 'Active(anon): 4729876 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037048 kB' 'Mapped: 72912 kB' 'AnonPages: 191760 kB' 'Shmem: 4541384 kB' 'KernelStack: 11144 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 383196 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 257892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
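
get_nodes (hugepages.sh@27-33 above) discovers the NUMA layout by globbing /sys/devices/system/node/node<N> with an extglob pattern and records each node's expected share of the pool: 1024 pages over no_nodes=2 gives 512 per node, and the @115-@116 loop then adds resv to each nodes_test entry before the per-node counters are verified. A sketch of that discovery step under the same sysfs layout (nodes_expected is an illustrative name, not the script's):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nodes_expected
    # One entry per NUMA node directory, keyed by node number.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_expected[${node##*node}]=512   # 1024 hugepages split across 2 nodes
    done
    echo "no_nodes=${#nodes_expected[@]} nodes: ${!nodes_expected[*]}"

The per-node readback that follows goes through the same get_meminfo path, only with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefixes are stripped before parsing.
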
00:03:15.273 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks node0's meminfo fields (MemTotal, MemFree, MemUsed, ...), "continue" on each, toward the HugePages_Surp match] 00:03:15.274 12:37:45 
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.274 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88139452 kB' 'MemUsed: 5579016 kB' 'SwapCached: 0 kB' 'Active: 2344752 kB' 'Inactive: 169232 kB' 'Active(anon): 2110284 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169232 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2171012 kB' 'Mapped: 108024 kB' 'AnonPages: 343032 kB' 'Shmem: 1767312 kB' 'KernelStack: 9400 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107880 kB' 'Slab: 415364 kB' 'SReclaimable: 107880 kB' 'SUnreclaim: 307484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:15.274-00:03:15.275 12:37:45 [trace condensed: the same setup/common.sh@31-32 field-by-field scan repeats over the node1 meminfo output above, MemTotal through HugePages_Free, none matching HugePages_Surp]
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:15.275 node0=512 expecting 512
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:15.275 node1=512 expecting 512
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:15.275
00:03:15.275 real 0m3.052s
00:03:15.275 user 0m1.249s
00:03:15.275 sys 0m1.872s
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:15.275 12:37:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:15.275 ************************************
00:03:15.275 END TEST per_node_1G_alloc
00:03:15.275 ************************************
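The trace above is dominated by setup/common.sh's get_meminfo helper walking a per-node meminfo file one field at a time: it prefers /sys/devices/system/node/node<N>/meminfo over /proc/meminfo, strips the "Node N " prefix from each line, and splits on ': ' until the requested field matches. A minimal standalone sketch of that parsing idiom, assuming a 2-node box like the one in this log (the helper name get_node_meminfo and the standalone framing are this sketch's own, not the SPDK source verbatim):

    #!/usr/bin/env bash
    # Sketch: read one field (e.g. HugePages_Surp) from a NUMA node's meminfo.
    # Per-node lines look like "Node 1 HugePages_Surp: 0", so the "Node N "
    # prefix is stripped before splitting on ': '.
    shopt -s extglob                      # enables the +([0-9]) pattern below
    get_node_meminfo() {                  # hypothetical name for this sketch
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"         # one array element per line
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1                          # field not present in this file
    }
    get_node_meminfo HugePages_Surp 1     # on the logged box would print 0

The quoted "$get" on the right of == forces a literal comparison, which is why every comparison in the xtrace appears with each character backslash-escaped.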
00:03:15.275 12:37:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:15.275 12:37:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:15.276 12:37:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:15.276 12:37:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.276 12:37:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:15.276 ************************************
00:03:15.276 START TEST even_2G_alloc
00:03:15.276 ************************************
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
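The sizing above is plain division: 2097152 kB at the default 2048 kB hugepage size gives nr_hugepages=1024, and with _no_nodes=2 the loop assigns 512 pages to each node, exactly the values the trace logs. A sketch of that arithmetic, with variable names local to this sketch (only NRHUGE and HUGE_EVEN_ALLOC come from the trace itself):

    # Sketch: even 2G hugepage split across NUMA nodes, mirroring the log.
    size_kb=2097152                                     # requested total, 2 GiB in kB
    default_hugepage_kb=2048                            # assumed 2 MiB default hugepages
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # -> 1024
    no_nodes=2
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes )) # nodes_test[1]=512, nodes_test[0]=512
    done
    export NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes     # environment handed to scripts/setup.sh
    echo "NRHUGE=$NRHUGE per-node: ${nodes_test[*]}"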
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.276 12:37:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.813 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.813 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.813 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.813 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.813 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.074 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.074 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.075 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.075 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.075 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
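The first thing verify_nr_hugepages does above is check the transparent-hugepage mode: the logged `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` is matching the bracketed active mode reported by the kernel. A sketch of that check in isolation, assuming the usual sysfs path:

    # Sketch: the kernel reports THP modes as e.g. "always [madvise] never",
    # with the active mode in brackets; globbing for a literal "[never]"
    # (brackets escaped) detects whether THP is switched off.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *\[never\]* ]]; then
        echo "THP active: $thp"   # anonymous hugepages may appear in meminfo
    else
        echo "THP disabled"
    fi

Only if THP is not disabled does the script go on to read AnonHugePages, which is why get_meminfo runs next with node unset (so it falls back to the system-wide /proc/meminfo).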
00:03:18.075 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.075 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175398656 kB' 'MemAvailable: 178270160 kB' 'Buffers: 3896 kB' 'Cached: 10204244 kB' 'SwapCached: 0 kB' 'Active: 7230712 kB' 'Inactive: 3507356 kB' 'Active(anon): 6838704 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532704 kB' 'Mapped: 179960 kB' 'Shmem: 6308776 kB' 'KReclaimable: 233184 kB' 'Slab: 798548 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565364 kB' 'KernelStack: 20512 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8340864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:18.075-00:03:18.076 12:37:48 [trace condensed: setup/common.sh@31-32 scans each /proc/meminfo field from MemTotal through HardwareCorrupted with one IFS=': ' read / continue pair per field, none matching AnonHugePages]
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.076 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175399772 kB' 'MemAvailable: 178271276 kB' 'Buffers: 3896 kB' 'Cached: 10204248 kB' 'SwapCached: 0 kB' 'Active: 7232184 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840176 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534576 kB' 'Mapped: 180388 kB' 'Shmem: 6308780 kB' 'KReclaimable: 233184 kB' 'Slab: 798532 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565348 kB' 'KernelStack: 20480 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8343556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:18.076-00:03:18.077 12:37:48 [trace condensed: the @31-32 scan for HugePages_Surp proceeds field by field, MemTotal through SUnreclaim so far, each iteration hitting continue ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.077 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.077 12:37:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.078 12:37:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
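The pass above is one complete get_meminfo lookup: the helper snapshots its meminfo source with printf, then walks the fields with IFS=': ' and read -r var val _, continuing past every field name until the requested one (HugePages_Surp here) matches, then echoes the value and returns. A minimal sketch of that lookup pattern, with illustrative structure (the traced setup/common.sh buffers the file into an array with mapfile first; this sketch streams it line by line):

  shopt -s extglob                      # for the +([0-9]) pattern below
  get_meminfo() {                       # sketch: get_meminfo <Field> [node]
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      # With a node id the per-node sysfs file is picked; with node empty the
      # probed path /sys/devices/system/node/node/meminfo never exists, so the
      # lookup falls back to /proc/meminfo, exactly as the trace shows.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }            # per-node lines carry "Node N "
          IFS=': ' read -r var val _ <<< "$line" # "Field: value kB" -> var, val
          [[ $var == "$get" ]] || continue       # skip until the field matches
          echo "$val" && return 0
      done < "$mem_f"
      return 1
  }
  # e.g. get_meminfo HugePages_Surp    -> prints 0 on this run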
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.078 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175399428 kB' 'MemAvailable: 178270932 kB' 'Buffers: 3896 kB' 'Cached: 10204264 kB' 'SwapCached: 0 kB' 'Active: 7235876 kB' 'Inactive: 3507356 kB' 'Active(anon): 6843868 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538320 kB' 'Mapped: 180696 kB' 'Shmem: 6308796 kB' 'KReclaimable: 233184 kB' 'Slab: 798532 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565348 kB' 'KernelStack: 20512 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8347020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
[... 00:03:18.078-342 setup/common.sh@31-32: the field scan skips MemTotal through Unaccepted, one field per iteration, until HugePages_Rsvd is reached ...]
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:18.342 nr_hugepages=1024
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.342 resv_hugepages=0
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.342 surplus_hugepages=0
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.342 anon_hugepages=0
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
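With anon, surp and resv all read back as 0, the hugepages.sh guard (( 1024 == nr_hugepages + surp + resv )) is the point of this block: the 1024 pages the test configured must be exactly what the kernel reports, with no surplus or reserved pages hiding in the count. Spelled out with this run's values, using get_meminfo as sketched above:

  # Accounting identity from the trace, with this run's values.
  anon=0; surp=0; resv=0               # AnonHugePages, HugePages_Surp, HugePages_Rsvd
  nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 on this machine
  if (( 1024 == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent"
  else
      echo "unexpected surplus/reserved hugepages" >&2
  fi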
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.342 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175404152 kB' 'MemAvailable: 178275656 kB' 'Buffers: 3896 kB' 'Cached: 10204288 kB' 'SwapCached: 0 kB' 'Active: 7236560 kB' 'Inactive: 3507356 kB' 'Active(anon): 6844552 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539544 kB' 'Mapped: 180696 kB' 'Shmem: 6308820 kB' 'KReclaimable: 233184 kB' 'Slab: 798532 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565348 kB' 'KernelStack: 20528 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8358452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315584 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
[... 00:03:18.343-345 setup/common.sh@31-32: the field scan skips MemTotal through Unaccepted, one field per iteration, until HugePages_Total is reached ...]
00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
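The get_meminfo calls traced above all follow the same pattern: pick /proc/meminfo or a node's sysfs meminfo file, strip the per-node "Node <n> " prefix, then split each line on ': ' and return the value of the first matching key. A minimal standalone sketch of that parser (a reconstruction mirroring the trace, not the canonical setup/common.sh):

    get_meminfo() {
        # Reconstruction of the helper traced above. Usage: get_meminfo <Key> [<node>]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        shopt -s extglob
        # Per-node counters live in sysfs; fall back to the global file otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <n> "; strip it so the same
        # key comparison works for both file flavors (needs extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Total     -> 1024 (system-wide, as echoed above)
    #      get_meminfo HugePages_Surp 0    -> 0    (node 0 only)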
00:03:18.345 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87241008 kB' 'MemUsed: 10421676 kB' 'SwapCached: 0 kB' 'Active: 4887484 kB' 'Inactive: 3338124 kB' 'Active(anon): 4729944 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037168 kB' 'Mapped: 72600 kB' 'AnonPages: 191584 kB' 'Shmem: 4541504 kB' 'KernelStack: 11128 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 383260 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 257956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@31-32 scans the node0 meminfo fields (MemTotal … HugePages_Free); none match HugePages_Surp ...]
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
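Between the two per-node reads, hugepages.sh@115-117 fold reserved and surplus pages into each node's expected count before the final comparison. A short sketch under this run's values (nodes_test and resv as traced, get_meminfo as sketched earlier):

    nodes_test=(512 512)   # expected pages per node after the even 2G split
    resv=0                 # system-wide reserved hugepages in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # @116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117, 0 here
    done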
00:03:18.347 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88163240 kB' 'MemUsed: 5555228 kB' 'SwapCached: 0 kB' 'Active: 2342852 kB' 'Inactive: 169232 kB' 'Active(anon): 2108384 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169232 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2171036 kB' 'Mapped: 107284 kB' 'AnonPages: 341096 kB' 'Shmem: 1767336 kB' 'KernelStack: 9336 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107880 kB' 'Slab: 415240 kB' 'SReclaimable: 107880 kB' 'SUnreclaim: 307360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@31-32 scans the node1 meminfo fields (MemTotal … HugePages_Free); none match HugePages_Surp ...]
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.348 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:18.349 node0=512 expecting 512
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:18.349 node1=512 expecting 512
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:18.349
00:03:18.349 real 0m3.072s
00:03:18.349 user 0m1.282s
00:03:18.349 sys 0m1.857s
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:18.349 12:37:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:18.349 ************************************
00:03:18.349 END TEST even_2G_alloc
00:03:18.349 ************************************
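Summing up the test that just passed: the 1024 pages allocated for this run were dealt out evenly as 512 per node, and the sorted_t/sorted_s comparison at hugepages.sh@126-130 reduces to [[ 512 == 512 ]]. A sketch of that final check (a reconstruction of the traced logic, using the variables from the sketches above):

    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # set of expected per-node counts
        sorted_s[${nodes_sys[node]}]=1    # set of counts reported via sysfs
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # An even allocation collapses both key sets to a single value (512 here),
    # so the verdict reduces to [[ 512 == 512 ]].
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]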
00:03:18.349 12:37:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:18.349 12:37:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:18.349 12:37:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.349 12:37:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.349 12:37:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:18.349 ************************************
00:03:18.349 START TEST odd_alloc
00:03:18.349 ************************************
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.349 12:37:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.672 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.672 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.672 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
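The sizing traced at hugepages.sh@49-84 above: 2098176 kB (HUGEMEM=2049 MiB) divided by the 2048 kB default page size is 1024.5, so the helper settles on nr_hugepages=1025; the per-node loop then walks the nodes from the highest index down, giving node1 floor(1025/2)=512 and node0 the remaining 513, which is exactly the ': 513' / ': 1' arithmetic visible in the trace. A sketch of that distribution loop (a reconstruction matching the traced values):

    _nr_hugepages=1025   # 2098176 kB / 2048 kB per page, rounded up
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        # Highest-numbered node first; integer division leaves the remainder
        # to be absorbed by the lower-numbered nodes.
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # 513 left, then 0
        : $(( --_no_nodes ))                                  # 1, then 0
    done
    echo "${nodes_test[@]}"   # 513 512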
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532448 kB' 'Mapped: 179980 kB' 'Shmem: 6308936 kB' 'KReclaimable: 233184 kB' 'Slab: 798708 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565524 kB' 'KernelStack: 20496 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8341676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.672 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same "common.sh@31 IFS=': ' / read -r var val _ / common.sh@32 [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" cycle repeats, all at 00:03:21.673, for each remaining /proc/meminfo key from Active(anon) through HardwareCorrupted ...]
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
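The cycle condensed above is bash xtrace from SPDK's setup/common.sh get_meminfo: each /proc/meminfo line is split on IFS=': ', every key that is not the requested one is skipped with `continue`, and the value of the matching key is echoed (here AnonHugePages -> 0). A minimal standalone sketch of the same parsing pattern, assuming a hypothetical helper name meminfo_value (it is not a function in the SPDK tree):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo loop traced above: split each line on ': ',
    # skip non-matching keys with `continue`, print the requested key's value.
    # meminfo_value is an illustrative name only.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"          # numeric part; a trailing "kB" unit lands in $_
            return 0
        done </proc/meminfo
        return 1
    }

    meminfo_value AnonHugePages   # prints 0 on this build host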
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.673 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175427220 kB' 'MemAvailable: 178298724 kB' 'Buffers: 3896 kB' 'Cached: 10204408 kB' 'SwapCached: 0 kB' 'Active: 7229924 kB' 'Inactive: 3507356 kB' 'Active(anon): 6837916 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532208 kB' 'Mapped: 179900 kB' 'Shmem: 6308940 kB' 'KReclaimable: 233184 kB' 'Slab: 798708 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565524 kB' 'KernelStack: 20496 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8341696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
[... key scan as before, one "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" cycle per key from MemTotal through HugePages_Rsvd, timestamps 00:03:21.673-674 ...]
00:03:21.674 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.674 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.674 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.674 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
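In the prologue traced above, node= is empty, so the common.sh@23 existence test degenerates to /sys/devices/system/node/node/meminfo, fails, and get_meminfo falls back to the system-wide /proc/meminfo. A plausible reconstruction of that source-selection branch (a sketch, not a verbatim copy of common.sh):

    # Prefer the per-NUMA-node meminfo file when a node is requested and
    # present; otherwise fall back to /proc/meminfo. With node="" the
    # candidate path never exists, which is exactly what the xtrace shows.
    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    pick_meminfo_file      # -> /proc/meminfo
    pick_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo on NUMA hosts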
00:03:21.674 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... common.sh@17-@31 prologue as above, now with get=HugePages_Rsvd and node= empty, again falling back to /proc/meminfo ...]
00:03:21.675 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [same snapshot as above except 'Committed_AS: 8341716 kB']
[... key scan, one "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" cycle per key from MemTotal through HugePages_Free, timestamps 00:03:21.675-676 ...]
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:21.676 nr_hugepages=1025
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.676 resv_hugepages=0
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.676 surplus_hugepages=0
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.676 anon_hugepages=0
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
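The two arithmetic guards just traced are the crux of the odd_alloc case: after asking for an odd number of 2048 kB hugepages (nr_hugepages=1025), the test requires the requested total to be accounted for with no reserved or surplus pages left over. A sketch of the same accounting check, reusing the hypothetical meminfo_value helper from above (the exact variables hugepages.sh compares are assumed, not confirmed by this excerpt):

    # Re-derive the hugepage counters and apply guards shaped like
    # hugepages.sh@107/@109. requested=1025 mirrors this run.
    requested=1025
    total=$(meminfo_value HugePages_Total)   # 1025 in the trace above
    resv=$(meminfo_value HugePages_Rsvd)     # 0
    surp=$(meminfo_value HugePages_Surp)     # 0

    (( requested == total + surp + resv )) || echo "hugepage accounting mismatch"
    (( requested == total )) || echo "odd allocation failed: got $total, wanted $requested"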
1025 == nr_hugepages )) 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175427780 kB' 'MemAvailable: 178299284 kB' 'Buffers: 3896 kB' 'Cached: 10204444 kB' 'SwapCached: 0 kB' 'Active: 7229956 kB' 'Inactive: 3507356 kB' 'Active(anon): 6837948 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532208 kB' 'Mapped: 179900 kB' 'Shmem: 6308976 kB' 'KReclaimable: 233184 kB' 'Slab: 798700 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565516 kB' 'KernelStack: 20496 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8341736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.676 12:37:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue [... repetitive xtrace elided: the scan steps through every remaining field of the snapshot (Buffers through CmaFree), taking the 'continue' branch on each non-match; the trace resumes below at the last keys and the HugePages_Total match ...]
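What the elided scan is doing, distilled: get_meminfo snapshots its meminfo source into an array with mapfile, then reads it entry by entry with IFS=': ' until the requested key matches, echoing the value (here 1025). A minimal runnable sketch of that pattern, assuming plain bash 4+ — the helper name get_meminfo_value and the direct /proc/meminfo read are illustrative, not the exact SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced above: snapshot once, then scan
    # key by key until the requested field matches.
    get_meminfo_value() {              # illustrative name
        local get=$1 line var val _
        local mem_f=/proc/meminfo      # same default source as the trace
        local -a mem
        mapfile -t mem < "$mem_f"      # one array entry per meminfo line
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"    # "HugePages_Total:  1025" -> var, val
            [[ $var == "$get" ]] || continue # the long run of 'continue's above
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo_value HugePages_Total      # prints 1025 on this box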
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87251948 kB' 'MemUsed: 10410736 kB' 'SwapCached: 0 kB' 'Active: 4886800 kB' 'Inactive: 3338124 kB' 'Active(anon): 4729260 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037304 kB' 'Mapped: 72600 kB' 'AnonPages: 190732 kB' 'Shmem: 4541640 kB' 'KernelStack: 11096 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 383388 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 258084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.677 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue [... repetitive xtrace elided: the same per-key scan over the node0 snapshot, taking 'continue' on each non-match; the trace resumes below at the HugePages_* tail and the HugePages_Surp match ...]
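The node-scoped lookups in this test differ from the global one only in where the snapshot comes from: when a node is given and its sysfs file exists, mem_f is repointed at /sys/devices/system/node/nodeN/meminfo, and the 'Node N ' prefix those lines carry is stripped with an extglob pattern, exactly as the node0 setup above shows. A hedged sketch of that selection, assuming bash with extglob available (node_meminfo is an illustrative name):

    #!/usr/bin/env bash
    # Sketch of the per-node source selection: sysfs lines look like
    # "Node 0 HugePages_Total:   512", so the prefix is stripped before parsing.
    shopt -s extglob                   # needed for the +([0-9]) pattern below
    node_meminfo() {                   # illustrative name
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
        printf '%s\n' "${mem[@]}"
    }
    node_meminfo 0 | grep '^HugePages'     # e.g. HugePages_Total: 512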
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88175684 kB' 'MemUsed: 5542784 kB' 'SwapCached: 0 kB' 'Active: 2342840 kB' 'Inactive: 169232 kB' 'Active(anon): 2108372 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169232 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2171076 kB' 'Mapped: 107300 kB' 'AnonPages: 341100 kB' 'Shmem: 1767376 kB' 'KernelStack: 9384 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107880 kB' 'Slab: 415312 kB' 'SReclaimable: 107880 kB' 'SUnreclaim: 307432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.678 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... repetitive xtrace elided: identical per-key scan over the node1 snapshot; the trace resumes below at the HugePages_Surp match ...]
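Once the node1 surplus below reads back as 0, the remaining checks are pure arithmetic: the 1025-page total must equal nr_hugepages + surplus + reserved, and the per-node spread is compared as a set, because the kernel is free to park the odd 1025th page on either node — which is why 'node0=512 expecting 513' still passes. A sketch of that final accounting using this run's numbers (the sorted_t/sorted_s trick mirrors the hugepages.sh lines traced below; the array literals are taken from this log):

    #!/usr/bin/env bash
    # Sketch of the odd_alloc verdict: totals must add up, and per-node
    # counts are compared order-insensitively.
    nr_hugepages=1025 surp=0 resv=0
    total=1025                         # HugePages_Total read back above
    (( total == nr_hugepages + surp + resv )) || { echo FAIL; exit 1; }

    nodes_sys=(512 513)                # per-node counts recorded by get_nodes
    nodes_test=(513 512)               # per-node targets carried by the test
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # indexed arrays: the count is the index,
        sorted_s[nodes_sys[node]]=1    # so "${!sorted_t[*]}" lists counts sorted
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node spread OK'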
00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:21.679 node0=512 expecting 513 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:21.679 node1=513 expecting 512 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:21.679 00:03:21.679 real 0m3.022s 00:03:21.679 user 0m1.222s 00:03:21.679 sys 0m1.870s 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.679 12:37:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.679 ************************************ 00:03:21.679 END TEST odd_alloc 00:03:21.679 ************************************ 00:03:21.679 12:37:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:21.679 12:37:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:21.679 12:37:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.679 12:37:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.679 12:37:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.679 ************************************ 00:03:21.679 START TEST custom_alloc 00:03:21.679 ************************************ 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.679 12:37:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.214 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.214 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:03:24.214 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.214 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.214 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.478 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.478 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.478 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174388928 kB' 'MemAvailable: 177260432 kB' 'Buffers: 3896 kB' 'Cached: 10204560 kB' 'SwapCached: 0 kB' 'Active: 7230820 kB' 'Inactive: 3507356 kB' 'Active(anon): 6838812 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532888 kB' 'Mapped: 180076 
kB' 'Shmem: 6309092 kB' 'KReclaimable: 233184 kB' 'Slab: 798720 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565536 kB' 'KernelStack: 20496 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8342236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:24.478 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.478 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 read / @32 compare / continue trace records elided for the remaining /proc/meminfo fields, none of which match AnonHugePages ...]
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
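The records above trace one full call of get_meminfo: setup/common.sh@17-33 pick /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given), mapfile the file into an array, strip any "Node N " prefix, then read each "field: value" pair until the requested field matches and its value is echoed. A minimal bash sketch of that flow, paraphrased from the trace rather than copied from the SPDK source; the shopt line is an assumption, since the +([0-9]) extglob pattern needs it but the trace does not show where it is enabled:

shopt -s extglob   # assumption: required for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # prefer the per-node view when a node was requested and it exists
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    while IFS=': ' read -r var val _; do
        # echo the value and stop as soon as the requested field is reached
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages   # prints 0 on this host, matching the anon=0 record above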
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174390144 kB' 'MemAvailable: 177261648 kB' 'Buffers: 3896 kB' 'Cached: 10204564 kB' 'SwapCached: 0 kB' 'Active: 7230776 kB' 'Inactive: 3507356 kB' 'Active(anon): 6838768 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532836 kB' 'Mapped: 179916 kB' 'Shmem: 6309096 kB' 'KReclaimable: 233184 kB' 'Slab: 798760 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565576 kB' 'KernelStack: 20512 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8343380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.479 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same read / compare / continue walk over every remaining field elided, until HugePages_Surp is reached ...]
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174390600 kB' 'MemAvailable: 177262104 kB' 'Buffers: 3896 kB' 'Cached: 10204580 kB' 'SwapCached: 0 kB' 'Active: 7230916 kB' 'Inactive: 3507356 kB' 'Active(anon): 6838908 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532992 kB' 'Mapped: 179916 kB' 'Shmem: 6309112 kB' 'KReclaimable: 233184 kB' 'Slab: 798760 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565576 kB' 'KernelStack: 20400 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8344892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.480 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
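All three dumps report the same hugetlb state, and the numbers are easy to sanity-check by hand (plain arithmetic, not part of the test itself):

# HugePages_Total x Hugepagesize should equal the Hugetlb line:
echo $(( 1536 * 2048 ))   # 3145728 (kB), i.e. 3 GiB backing the 512 + 1024 requested pages

Nothing is reserved or surplus (HugePages_Rsvd: 0, HugePages_Surp: 0), and all 1536 pages are still free.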
[... the setup/common.sh@31 read / @32 compare / continue walk over the remaining fields elided ...]
00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:24.481 nr_hugepages=1536 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.481 resv_hugepages=0 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.481 surplus_hugepages=0 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.481 anon_hugepages=0 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174389520 kB' 'MemAvailable: 177261024 kB' 'Buffers: 3896 kB' 'Cached: 10204604 kB' 'SwapCached: 0 kB' 'Active: 7231116 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839108 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533152 kB' 'Mapped: 179916 kB' 'Shmem: 
6309136 kB' 'KReclaimable: 233184 kB' 'Slab: 798760 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565576 kB' 'KernelStack: 20624 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8344768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.481 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
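What the trace above is showing: get_meminfo resolves a single meminfo field by scanning the file line by line with IFS=': ' and skipping every non-matching field, so xtrace emits one [[ ... ]] / continue pair per field until the requested key (here HugePages_Rsvd, then HugePages_Total) is reached. A minimal re-creation of that lookup pattern, assuming plain bash; the name get_meminfo_value is illustrative, not SPDK's setup/common.sh helper:

    get_meminfo_value() {
        local get=$1 var val _
        # Each /proc/meminfo line looks like "Field:    12345 kB";
        # IFS=': ' splits it into field name, value, and unit.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip until the requested field
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on this box, matching resv=0 above

The linear scan costs one test per field, which is why a single get_meminfo call expands to dozens of xtrace lines in this log.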
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87255668 kB' 'MemUsed: 10407016 kB' 'SwapCached: 0 kB' 'Active: 4888336 kB' 'Inactive: 3338124 kB' 'Active(anon): 4730796 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037428 kB' 'Mapped: 72600 kB' 'AnonPages: 192168 kB' 'Shmem: 4541764 kB' 'KernelStack: 11096 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 383272 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 257968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: field-by-field scan of the node0 meminfo fields until HugePages_Surp matches]
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87132524 kB' 'MemUsed: 6585944 kB' 'SwapCached: 0 kB' 'Active: 2343016 kB' 'Inactive: 169232 kB' 'Active(anon): 2108548 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169232 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2171088 kB' 'Mapped: 107316 kB' 'AnonPages: 341272 kB' 'Shmem: 1767388 kB' 'KernelStack: 9656 kB' 'PageTables: 5148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107880 kB' 'Slab: 415488 kB' 'SReclaimable: 107880 kB' 'SUnreclaim: 307608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:24.482 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
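The per-node HugePages_Surp lookups above follow the same scan, except against /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node <N> " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that variant, assuming plain bash; get_node_meminfo_value is an illustrative name, not SPDK's helper:

    get_node_meminfo_value() {
        local get=$1 node=$2 line var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        [[ -e $mem_f ]] || return 1
        while IFS= read -r line; do
            line=${line#Node "$node" }          # drop the "Node <N> " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue    # scan until the field matches
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

With this run's numbers, node0 reports HugePages_Total 512 and node1 reports 1024, so the custom allocation accounts for all 512 + 1024 = 1536 hugepages with 0 surplus and 0 reserved, the same balance the (( 1536 == nr_hugepages + surp + resv )) check verified earlier.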
[xtrace condensed: field-by-field scan of the node1 meminfo fields, skipping MemTotal through HugePages_Free, until HugePages_Surp matches]
00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.483 12:37:55
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.483 node0=512 expecting 512 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:24.483 node1=1024 expecting 1024 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:24.483 00:03:24.483 real 0m3.082s 00:03:24.483 user 0m1.231s 00:03:24.483 sys 0m1.919s 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.483 12:37:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.483 ************************************ 00:03:24.483 END TEST custom_alloc 00:03:24.483 ************************************ 00:03:24.483 12:37:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.483 12:37:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:24.483 12:37:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.483 12:37:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.483 12:37:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.742 ************************************ 00:03:24.742 START TEST no_shrink_alloc 00:03:24.742 ************************************ 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.742 12:37:55 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.742 12:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.277 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.277 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.277 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- 
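Before the trace proceeds into get_meminfo, note what the allocation math above did: get_test_nr_hugepages turned the requested size of 2097152 kB into nr_hugepages=1024 (2097152 kB / 2048 kB per huge page), and get_test_nr_hugepages_per_node pinned the whole request to the caller-supplied node 0. A minimal bash sketch of the branch the xtrace exercises; the variable names come from the trace, while the fallback branch and surrounding plumbing are assumptions rather than SPDK's verbatim source:

# Sketch of the hugepages.sh@62-73 branch seen in the trace above.
get_test_nr_hugepages_per_node() {
	local user_nodes=("$@")              # ('0') in this run
	local _nr_hugepages=$nr_hugepages    # 1024, set by get_test_nr_hugepages
	local _no_nodes=2                    # NUMA node count on this rig
	local -g nodes_test=()
	if (( ${#user_nodes[@]} > 0 )); then
		# The trace reuses _no_nodes as the loop variable over the
		# caller's node list and pins the full request to each node.
		for _no_nodes in "${user_nodes[@]}"; do
			nodes_test[_no_nodes]=$_nr_hugepages
		done
		return 0
	fi
	# An even split across all _no_nodes would go here (branch not taken in this run).
}

With nodes_test[0]=1024 recorded, setup.sh rebinds the test devices (all already on vfio-pci above) and verify_nr_hugepages starts cross-checking that request against /proc/meminfo.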
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.542 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175417584 kB' 'MemAvailable: 178289088 kB' 'Buffers: 3896 kB' 'Cached: 10204712 kB' 'SwapCached: 0 kB' 'Active: 7232552 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840544 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534016 kB' 'Mapped: 180016 kB' 'Shmem: 6309244 kB' 'KReclaimable: 233184 kB' 'Slab: 798676 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565492 kB' 'KernelStack: 20752 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8345576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315788 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:27.543 [... xtrace loop repeats setup/common.sh@31-32 for each /proc/meminfo key from MemTotal through HardwareCorrupted, none matching AnonHugePages ...]
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.543 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.544 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.544 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.544 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.544 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175416160 kB' 'MemAvailable: 178287664 kB' 'Buffers: 3896 kB' 'Cached: 10204716 kB' 'SwapCached: 0 kB' 'Active: 7231932 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839924 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533360 kB' 'Mapped: 180024 kB' 'Shmem: 6309248 kB' 'KReclaimable: 233184 kB' 'Slab: 798660 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565476 kB' 'KernelStack: 20688 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8345576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:27.544 [... xtrace loop repeats setup/common.sh@31-32 for each /proc/meminfo key from MemTotal through Unaccepted, none matching HugePages_Surp ...]
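Both get_meminfo calls above follow the same pattern: snapshot the meminfo file once (the long printf entry is the traced snapshot), then scan it key by key with IFS=': ' read, skipping via continue until the requested key matches, and print that key's value. A self-contained sketch reconstructed from the setup/common.sh@16-33 trace; treat it as an approximation of the helper, not its verbatim source:

#!/usr/bin/env bash
# get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo, or from
# the per-node copy when NODE is given and the sysfs file exists.
get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo mem line
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node N "; strip it (extglob).
	shopt -s extglob
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		# IFS=': ' splits on both the colon and spaces, so val is the
		# number and the trailing unit ("kB") lands in _.
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the continues flooding the trace
		echo "$val"
		return 0
	done
	return 1
}

In this run both lookups print 0, which verify_nr_hugepages records as anon=0 and surp=0; the snapshots also show the pool itself (HugePages_Total: 1024, HugePages_Free: 1024), matching the 1024 pages requested.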
00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175414684 kB' 'MemAvailable: 178286188 kB' 'Buffers: 3896 kB' 'Cached: 10204716 kB' 'SwapCached: 0 kB' 'Active: 7231604 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839596 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533560 kB' 'Mapped: 179928 kB' 'Shmem: 6309248 kB' 
'KReclaimable: 233184 kB' 'Slab: 798644 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565460 kB' 'KernelStack: 20640 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8345760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.545 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
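The identical scan repeats below for HugePages_Rsvd and then HugePages_Total; only the target key changes. Further down the trace, get_meminfo is also called per NUMA node (node 0), where it reads /sys/devices/system/node/node0/meminfo instead and strips the leading "Node 0 " from every line with an extglob substitution, essentially as the traced mapfile/mem= commands show (the input redirection is reconstructed here from the mem_f value in the trace):

    # Per-node variant mirroring the traced commands (node 0 on this machine).
    shopt -s extglob                                     # required for the +([0-9]) pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node 0 " prefix
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Total'  # -> HugePages_Total: 1024 here

After the prefix strip, the per-node lines have the same "key: value" shape as /proc/meminfo, so the same read loop handles both files.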
00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.546 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.547 nr_hugepages=1024 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.547 resv_hugepages=0 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.547 surplus_hugepages=0 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.547 anon_hugepages=0 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175416284 kB' 'MemAvailable: 178287788 kB' 'Buffers: 3896 kB' 'Cached: 10204716 kB' 'SwapCached: 0 kB' 'Active: 7232076 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840068 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534056 kB' 'Mapped: 179928 kB' 'Shmem: 6309248 kB' 'KReclaimable: 233184 kB' 'Slab: 798644 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 565460 kB' 'KernelStack: 20656 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8345784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.547 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.548 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86208596 kB' 'MemUsed: 11454088 kB' 'SwapCached: 0 kB' 'Active: 4888360 kB' 'Inactive: 3338124 kB' 'Active(anon): 4730820 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037564 kB' 'Mapped: 72608 kB' 'AnonPages: 192100 kB' 'Shmem: 4541900 kB' 'KernelStack: 11144 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 383164 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 257860 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.549 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 
12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace iterations over the remaining /proc/meminfo fields elided ...]
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.550 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:27.551 node0=1024 expecting 1024
00:03:27.551 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
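The 'node0=1024 expecting 1024' line above is the per-node verification step: the script tallies the hugepages reported for each NUMA node and compares the total against the count the test expects. A minimal sketch of that comparison, assuming the standard sysfs node-meminfo layout (the expected count of 1024 matches this run, but the loop itself is illustrative, not the hugepages.sh source):

  # Sketch: compare per-node HugePages_Total against an expected count.
  expected=1024
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*/node}
      # Node meminfo entries look like: "Node 0 HugePages_Total:  1024"
      total=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
      echo "node${node}=${total} expecting ${expected}"
      (( total == expected )) || exit 1   # a mismatch fails the verification
  done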
00:03:27.551 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:27.551 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:27.551 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:27.551 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.551 12:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.849 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.849 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.849 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.849 INFO: Requested 512 hugepages but 1024 already allocated on node0
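The INFO line above records the behavior this no_shrink_alloc test appears to exercise: setup.sh was asked for 512 hugepages (NRHUGE=512) while node0 already holds 1024, and with CLEAR_HUGE=no it leaves the larger pool in place rather than shrinking it. A hedged sketch of that grow-only decision (the variable names and message are modeled on the log, not taken from setup.sh internals; writing nr_hugepages needs root):

  # Sketch: grow the per-node hugepage pool if needed, but never shrink it.
  requested=${NRHUGE:-512}
  node=0
  nr=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
  current=$(<"$nr")
  if (( current >= requested )); then
      echo "INFO: Requested ${requested} hugepages but ${current} already allocated on node${node}"
  else
      echo "$requested" > "$nr"   # top the pool up to the requested count
  fi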
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175408360 kB' 'MemAvailable: 178279864 kB' 'Buffers: 3896 kB' 'Cached: 10204840 kB' 'SwapCached: 0 kB' 'Active: 7232332 kB' 'Inactive: 3507356 kB' 'Active(anon): 6840324 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533680 kB' 'Mapped: 180040 kB' 'Shmem: 6309372 kB' 'KReclaimable: 233184 kB' 'Slab: 798056 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 564872 kB' 'KernelStack: 20528 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8343660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.849 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace iterations over the remaining /proc/meminfo fields elided ...]
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
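Each get_meminfo call traced above follows the same visible pattern: snapshot the meminfo file into an array with mapfile, strip any 'Node <n> ' prefix, then scan entry by entry with IFS=': ', skipping non-matching keys via continue and echoing the value once the requested field matches. A compact re-creation of that scan (a sketch that mirrors the xtrace, not the actual common.sh):

  # Sketch of the meminfo field scan whose xtrace appears above.
  get_meminfo() {
      local get=$1 mem_f=/proc/meminfo var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip fields we are not after
          echo "$val"
          return 0
      done < "$mem_f"
  }
  get_meminfo AnonHugePages   # prints 0 for the snapshot logged above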
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175409320 kB' 'MemAvailable: 178280824 kB' 'Buffers: 3896 kB' 'Cached: 10204844 kB' 'SwapCached: 0 kB' 'Active: 7231516 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839508 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533388 kB' 'Mapped: 179940 kB' 'Shmem: 6309376 kB' 'KReclaimable: 233184 kB' 'Slab: 798020 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 564836 kB' 'KernelStack: 20512 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8343676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.850 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace iterations over the remaining /proc/meminfo fields elided ...]
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
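With anon=0 and surp=0 in hand, verify_nr_hugepages fetches HugePages_Rsvd next. Together these values let the test confirm the pool it set up is still intact: no transparent hugepages skewing the accounting, no surplus pages above the configured total, and (fetched below) no pages reserved behind its back. A sketch of how such a consistency check could combine them, reusing the get_meminfo sketch above (the specific assertions are assumptions, not the hugepages.sh source):

  # Sketch: sanity-check the hugepage pool from the values fetched above.
  anon=$(get_meminfo AnonHugePages)    # 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # fetched next in the trace
  free=$(get_meminfo HugePages_Free)   # 1024 in the snapshots above
  total=$(get_meminfo HugePages_Total) # 1024 in the snapshots above
  (( surp == 0 && resv == 0 )) || echo "WARN: surplus/reserved pages present"
  (( free == total )) || echo "WARN: $((total - free)) hugepages are in use"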
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175409320 kB' 'MemAvailable: 178280824 kB' 'Buffers: 3896 kB' 'Cached: 10204844 kB' 'SwapCached: 0 kB' 'Active: 7231516 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839508 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533388 kB' 'Mapped: 179940 kB' 'Shmem: 6309376 kB' 'KReclaimable: 233184 kB' 'Slab: 798020 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 564836 kB' 'KernelStack: 20512 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8343700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB'
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.852 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace iterations over the intervening /proc/meminfo fields elided ...]
00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:30.853 12:38:01
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.853 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.854 nr_hugepages=1024 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.854 resv_hugepages=0 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.854 surplus_hugepages=0 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.854 anon_hugepages=0 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.854 12:38:01 
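
For reference, the get_meminfo helper unrolled in the trace above reduces to a short field scanner over /proc/meminfo (or a node's meminfo file). A minimal sketch, paraphrased from the setup/common.sh commands visible in the trace rather than copied from SPDK source:

    shopt -s extglob                             # needed for the "Node +([0-9]) " strip below
    get_meminfo_sketch() {                       # usage: get_meminfo_sketch <field> [node]
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node statistics live in sysfs and carry a "Node <n> " prefix on each line.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # drop the per-node prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue     # this comparison is what the xtrace unrolls key by key
            echo "$val"                          # numeric value only; the kB unit lands in $_
            return 0
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Rsvd it prints 0, matching the echo 0 / resv=0 seen above.

00:03:30.854 12:38:01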
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175408608 kB' 'MemAvailable: 178280112 kB' 'Buffers: 3896 kB' 'Cached: 10204848 kB' 'SwapCached: 0 kB' 'Active: 7231684 kB' 'Inactive: 3507356 kB' 'Active(anon): 6839676 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533552 kB' 'Mapped: 179940 kB' 'Shmem: 6309380 kB' 'KReclaimable: 233184 kB' 'Slab: 798020 kB' 'SReclaimable: 233184 kB' 'SUnreclaim: 564836 kB' 'KernelStack: 20496 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8343720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 76800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2862036 kB' 'DirectMap2M: 14643200 kB' 'DirectMap1G: 184549376 kB' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.854 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.855 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.855 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.855 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
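
The full snapshot just printed is internally consistent: HugePages_Total 1024 pages * Hugepagesize 2048 kB = 2097152 kB, exactly the Hugetlb figure reported. The same invariant can be checked on any host with a one-off awk pass (illustrative only, not part of the test):

    awk '/^HugePages_Total:/ {n = $2}
         /^Hugepagesize:/    {sz = $2}
         /^Hugetlb:/         {tlb = $2}
         END { if (n * sz == tlb) print "hugetlb pool consistent"; else print "mismatch" }' /proc/meminfo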
00:03:30.855-00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [the same IFS/read/compare/continue scan, now against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, skipping Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped]
00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.856 12:38:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86211428 kB' 'MemUsed: 11451256 kB' 'SwapCached: 0 kB' 'Active: 4889940 kB' 'Inactive: 3338124 kB' 'Active(anon): 4732400 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338124 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037632 kB' 'Mapped: 72600 kB' 'AnonPages: 193120 kB' 'Shmem: 4541968 kB' 'KernelStack: 11112 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125304 kB' 'Slab: 382880 kB' 'SReclaimable: 125304 kB' 'SUnreclaim: 257576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
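
get_nodes above found two NUMA nodes (no_nodes=2), with the entire 1024-page pool expected on node0 and none on node1; the file just read is the Node-prefixed per-node variant of /proc/meminfo. The same per-node counters are also exposed as one-value-per-file sysfs entries, handy for spot checks by hand (standard kernel paths; the 2048kB directory matches the Hugepagesize reported earlier):

    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/surplus_hugepages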
00:03:30.856-00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [the same scan over node0's meminfo, now against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, skipping Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages]
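
HugePages_Surp counts surplus pages, i.e. pages the kernel allocated beyond nr_hugepages under overcommit; it is expected to read 0 here because this run never raises the overcommit ceiling (standard kernel knob, shown for reference only):

    cat /proc/sys/vm/nr_overcommit_hugepages    # 0 means no surplus pages can be created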
00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.857 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.858 node0=1024 expecting 1024 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.858 00:03:30.858 real 0m5.932s 00:03:30.858 user 0m2.365s 00:03:30.858 sys 0m3.701s 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.858 12:38:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.858 ************************************ 00:03:30.858 END TEST no_shrink_alloc 00:03:30.858 ************************************ 00:03:30.858 12:38:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.858 12:38:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.858 00:03:30.858 real 0m22.737s 00:03:30.858 user 0m8.901s 00:03:30.858 sys 0m13.534s 00:03:30.858 12:38:01 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.858 12:38:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.858 ************************************ 00:03:30.858 END TEST hugepages 00:03:30.858 ************************************ 00:03:30.858 12:38:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:30.858 12:38:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.858 12:38:01 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.858 12:38:01 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.858 12:38:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.858 ************************************ 00:03:30.858 START TEST driver 00:03:30.858 ************************************ 00:03:30.858 12:38:01 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.858 * Looking for test storage... 
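
The hugepage suites above tear the pool back down before the driver tests start: the clear_hp trace is just two nested loops that zero the per-node, per-size nr_hugepages counters in sysfs and flag the pool as cleared. A minimal standalone sketch of that teardown, assuming the standard /sys/devices/system/node layout (the real setup/hugepages.sh walks its nodes_sys indices instead of globbing):

  # Sketch of the clear_hp teardown traced above; error handling omitted.
  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # ask the kernel to free all pages of this size on this node
          done
      done
      export CLEAR_HUGE=yes                 # marker the later suites read
  }

Writing 0 releases every reserved hugepage of that size on that node, which is why the suites that follow can start from a known-empty pool.
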
00:03:30.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:30.858 12:38:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:30.858 12:38:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:30.858 12:38:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:35.047 12:38:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:35.047 12:38:05 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:35.047 12:38:05 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:35.047 12:38:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:35.047 ************************************
00:03:35.047 START TEST guess_driver
00:03:35.047 ************************************
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:35.047 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:35.047 12:38:05 setup.sh.driver.guess_driver
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:35.047 Looking for driver=vfio-pci 00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.047 12:38:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.589 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.848 12:38:08 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.848 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:37.849 12:38:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.786 12:38:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.980 00:03:42.980 real 0m7.908s 00:03:42.980 user 0m2.383s 00:03:42.980 sys 0m4.022s 00:03:42.980 12:38:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.980 12:38:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:42.980 ************************************ 00:03:42.980 END TEST guess_driver 00:03:42.980 ************************************ 00:03:42.980 12:38:13 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:42.980 00:03:42.980 real 0m12.156s 00:03:42.980 user 0m3.578s 00:03:42.980 sys 0m6.267s 00:03:42.980 12:38:13 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.980 12:38:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:42.980 ************************************ 00:03:42.980 END TEST driver 00:03:42.980 ************************************ 00:03:42.980 12:38:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:42.980 12:38:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:42.980 12:38:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.980 12:38:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.980 12:38:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.980 ************************************ 00:03:42.980 START TEST devices 00:03:42.980 ************************************ 00:03:42.980 12:38:13 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:42.980 * Looking for test storage... 00:03:42.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.980 12:38:13 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:42.980 12:38:13 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:42.980 12:38:13 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.980 12:38:13 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.273 12:38:16 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:46.273 12:38:16 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:46.273 12:38:16 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:46.273 
12:38:16 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:46.273 No valid GPT data, bailing 00:03:46.273 12:38:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:46.273 12:38:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:46.273 12:38:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:46.273 12:38:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:46.273 12:38:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:46.273 12:38:17 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:46.273 12:38:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:46.273 12:38:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.273 12:38:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.273 12:38:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.273 ************************************ 00:03:46.273 START TEST nvme_mount 00:03:46.273 ************************************ 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no ))
00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:46.273 12:38:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:47.272 Creating new GPT entries in memory.
00:03:47.272 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:47.272 other utilities.
00:03:47.272 12:38:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:47.272 12:38:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:47.272 12:38:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:47.272 12:38:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:47.272 12:38:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:48.206 Creating new GPT entries in memory.
00:03:48.206 The operation has completed successfully.
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1514467
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:48.206 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
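
The partition/format/mount sequence traced above reduces to a handful of commands, and the (( ... )) sector arithmetic is worth spelling out once: size /= 512 converts the 1073741824-byte partition size into 2097152 512-byte sectors, and part_end = 2048 + 2097152 - 1 = 2099199, which is exactly the --new=1:2048:2099199 argument. A condensed sketch, with the device and mount point taken from this run:

  # Condensed form of the partition_drive/mkfs/mount steps traced above.
  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                           # destroy existing GPT and MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # 1 GiB first partition, serialized on the disk
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"                          # quiet, force even if old signatures remain
  mount "${disk}p1" "$mnt"
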
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.464 12:38:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.051 12:38:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.051 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.051 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:51.051 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:51.311 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:51.311 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:51.570 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:51.570 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:51.570 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:51.570 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.570 12:38:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.107 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.367 12:38:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
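
The verify pass running above scans the per-device lines printed by scripts/setup.sh config: the first field of each line is a PCI address, and the trailing status text reports who currently holds the device (here "Active devices: data@nvme0n1, so not binding PCI dev"). Roughly, in the shape of the traced read loop -- setup_output below is a hypothetical stand-in for the script's "setup output config" helper, not its real name:

  # Sketch of the verify loop: require that the device under test is reported
  # as held by the expected consumer rather than eligible for rebinding.
  verify() {
      local dev=$1 expected=$2 pci status found=0
      while read -r pci _ _ status; do
          [[ $pci == "$dev" && $status == *"Active devices: "*"$expected"* ]] && found=1
      done < <(setup_output)
      (( found == 1 ))   # function succeeds only if the match was seen
  }

  verify 0000:5e:00.0 data@nvme0n1
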
00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.663 12:38:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:57.663 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.663 00:03:57.663 real 0m11.025s 00:03:57.663 user 0m3.262s 00:03:57.663 sys 0m5.610s 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.663 12:38:28 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:57.663 ************************************ 00:03:57.663 END TEST nvme_mount 00:03:57.663 ************************************ 00:03:57.663 12:38:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:57.663 12:38:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:57.663 12:38:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.663 12:38:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.663 12:38:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:57.663 ************************************ 00:03:57.663 START TEST dm_mount 00:03:57.663 ************************************ 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:57.663 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.664 12:38:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:58.234 Creating new GPT entries in memory. 00:03:58.234 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.234 other utilities. 00:03:58.234 12:38:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.234 12:38:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.234 12:38:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:58.234 12:38:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.234 12:38:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:59.615 Creating new GPT entries in memory. 00:03:59.615 The operation has completed successfully. 00:03:59.615 12:38:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.615 12:38:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.615 12:38:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.615 12:38:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.615 12:38:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:00.587 The operation has completed successfully. 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1518657 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.587 12:38:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.123 12:38:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:03.382 12:38:34 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.382 12:38:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.919 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.179 12:38:36 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.179 12:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.179 12:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.179 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.179 12:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.179 12:38:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.179 00:04:06.179 real 0m8.931s 00:04:06.179 user 0m2.236s 00:04:06.179 sys 0m3.727s 00:04:06.179 12:38:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.179 12:38:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:06.179 ************************************ 00:04:06.179 END TEST dm_mount 00:04:06.179 ************************************ 00:04:06.179 12:38:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.179 12:38:37 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.439 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:06.439 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:06.439 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.439 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.439 12:38:37 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:06.439 00:04:06.439 real 0m23.659s 00:04:06.439 user 0m6.773s 00:04:06.439 sys 0m11.633s 00:04:06.439 12:38:37 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.439 12:38:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.439 ************************************ 00:04:06.439 END TEST devices 00:04:06.439 ************************************ 00:04:06.699 12:38:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.699 00:04:06.699 real 1m19.426s 00:04:06.699 user 0m26.392s 00:04:06.699 sys 0m43.835s 00:04:06.699 12:38:37 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.699 12:38:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.699 ************************************ 00:04:06.699 END TEST setup.sh 00:04:06.699 ************************************ 00:04:06.699 12:38:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:06.699 12:38:37 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:09.235 Hugepages 00:04:09.235 node hugesize free / total 00:04:09.235 node0 1048576kB 0 / 0 00:04:09.235 node0 2048kB 2048 / 2048 00:04:09.235 node1 1048576kB 0 / 0 00:04:09.235 node1 2048kB 0 / 0 00:04:09.235 00:04:09.235 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.235 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:09.235 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:09.494 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:09.494 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:09.494 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:09.494 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:09.494 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:09.494 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:09.494 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:09.494 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:09.494 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:09.494 12:38:40 -- spdk/autotest.sh@130 -- # uname -s 00:04:09.494 12:38:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:09.494 12:38:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:09.494 12:38:40 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.833 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.833 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.092 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.350 12:38:44 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:14.288 12:38:45 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:14.288 12:38:45 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:14.288 12:38:45 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.288 12:38:45 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:14.288 12:38:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:14.288 12:38:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:14.288 12:38:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.288 12:38:45 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.288 12:38:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:14.288 12:38:45 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:14.288 12:38:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:14.288 12:38:45 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.578 Waiting for block devices as requested 00:04:17.578 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:17.578 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:17.578 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:17.578 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.578 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.578 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:17.578 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:17.837 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:17.837 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:17.837 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.095 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.095 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:18.095 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:18.095 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.353 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:18.353 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:18.353 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:18.612 12:38:49 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:18.612 12:38:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:18.612 12:38:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:18.612 12:38:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:18.612 12:38:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:18.612 12:38:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:18.612 12:38:49 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:18.612 12:38:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:18.612 12:38:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:18.612 12:38:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:18.612 12:38:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:18.612 12:38:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:18.612 12:38:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:18.612 12:38:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:18.612 12:38:49 -- common/autotest_common.sh@1557 -- # continue 00:04:18.612 12:38:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:18.612 12:38:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.613 12:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:18.613 12:38:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:18.613 12:38:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.613 12:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:18.613 12:38:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.902 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
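A note on the oacs probe in the pre_cleanup loop above: nvme id-ctrl reports oacs : 0xe, and the harness evidently masks bit 3 (0x8, Namespace Management and Attachment) to get oacs_ns_manage=8, which is why the [[ 8 -ne 0 ]] branch is taken. A minimal standalone sketch of the same check (assumes nvme-cli and a /dev/nvme0 node, as in this run):

  # Read OACS from identify-controller and test the NS-management bit.
  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. " 0xe"
  ns_manage=$(( oacs & 0x8 ))                               # bit 3 of OACS
  [[ $ns_manage -ne 0 ]] && echo "$ctrlr supports namespace management"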
00:04:21.902 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.902 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.162 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.422 12:38:53 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:22.422 12:38:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.422 12:38:53 -- common/autotest_common.sh@10 -- # set +x 00:04:22.422 12:38:53 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:22.422 12:38:53 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:22.422 12:38:53 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.422 12:38:53 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:22.422 12:38:53 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:22.422 12:38:53 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:22.422 12:38:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:22.422 12:38:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:22.422 12:38:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.422 12:38:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:22.422 12:38:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:22.422 12:38:53 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:22.422 12:38:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:22.422 12:38:53 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:22.422 12:38:53 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:22.422 12:38:53 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:22.422 12:38:53 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:22.422 12:38:53 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:22.422 12:38:53 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:22.422 12:38:53 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:22.422 12:38:53 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1527453 00:04:22.422 12:38:53 -- common/autotest_common.sh@1598 -- # waitforlisten 1527453 00:04:22.422 12:38:53 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.422 12:38:53 -- common/autotest_common.sh@829 -- # '[' -z 1527453 ']' 00:04:22.422 12:38:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.422 12:38:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.422 12:38:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.422 12:38:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.422 12:38:53 -- common/autotest_common.sh@10 -- # set +x 00:04:22.681 [2024-07-15 12:38:53.382479] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
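The get_nvme_bdfs / get_nvme_bdfs_by_id pair traced above builds its device list from gen_nvme.sh output and then filters each address on the PCI device ID read back from sysfs. Condensed into a standalone sketch (rootdir as in this workspace; jq assumed present, which the trace itself confirms):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  want=0x0a54                                    # device ID the harness matches
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] \
      && echo "opal-revert candidate: $bdf"
  done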
00:04:22.681 [2024-07-15 12:38:53.382525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527453 ] 00:04:22.681 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.681 [2024-07-15 12:38:53.447901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.681 [2024-07-15 12:38:53.527795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.248 12:38:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.248 12:38:54 -- common/autotest_common.sh@862 -- # return 0 00:04:23.248 12:38:54 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:23.248 12:38:54 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:23.248 12:38:54 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:26.535 nvme0n1 00:04:26.535 12:38:57 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:26.535 [2024-07-15 12:38:57.320338] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:26.535 request: 00:04:26.535 { 00:04:26.535 "nvme_ctrlr_name": "nvme0", 00:04:26.536 "password": "test", 00:04:26.536 "method": "bdev_nvme_opal_revert", 00:04:26.536 "req_id": 1 00:04:26.536 } 00:04:26.536 Got JSON-RPC error response 00:04:26.536 response: 00:04:26.536 { 00:04:26.536 "code": -32602, 00:04:26.536 "message": "Invalid parameters" 00:04:26.536 } 00:04:26.536 12:38:57 -- common/autotest_common.sh@1604 -- # true 00:04:26.536 12:38:57 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:26.536 12:38:57 -- common/autotest_common.sh@1608 -- # killprocess 1527453 00:04:26.536 12:38:57 -- common/autotest_common.sh@948 -- # '[' -z 1527453 ']' 00:04:26.536 12:38:57 -- common/autotest_common.sh@952 -- # kill -0 1527453 00:04:26.536 12:38:57 -- common/autotest_common.sh@953 -- # uname 00:04:26.536 12:38:57 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.536 12:38:57 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1527453 00:04:26.536 12:38:57 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.536 12:38:57 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.536 12:38:57 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1527453' 00:04:26.536 killing process with pid 1527453 00:04:26.536 12:38:57 -- common/autotest_common.sh@967 -- # kill 1527453 00:04:26.536 12:38:57 -- common/autotest_common.sh@972 -- # wait 1527453 00:04:28.439 12:38:58 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:28.439 12:38:58 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:28.439 12:38:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:28.439 12:38:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:28.439 12:38:58 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:28.439 12:38:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.439 12:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:28.439 12:38:58 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:28.439 12:38:58 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:28.439 12:38:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:28.439 12:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.440 12:38:58 -- common/autotest_common.sh@10 -- # set +x 00:04:28.440 ************************************ 00:04:28.440 START TEST env 00:04:28.440 ************************************ 00:04:28.440 12:38:58 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:28.440 * Looking for test storage... 00:04:28.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:28.440 12:38:59 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.440 12:38:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.440 12:38:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.440 12:38:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.440 ************************************ 00:04:28.440 START TEST env_memory 00:04:28.440 ************************************ 00:04:28.440 12:38:59 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.440 00:04:28.440 00:04:28.440 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.440 http://cunit.sourceforge.net/ 00:04:28.440 00:04:28.440 00:04:28.440 Suite: memory 00:04:28.440 Test: alloc and free memory map ...[2024-07-15 12:38:59.159154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.440 passed 00:04:28.440 Test: mem map translation ...[2024-07-15 12:38:59.178292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.440 [2024-07-15 12:38:59.178306] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.440 [2024-07-15 12:38:59.178343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.440 [2024-07-15 12:38:59.178349] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.440 passed 00:04:28.440 Test: mem map registration ...[2024-07-15 12:38:59.216875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:28.440 [2024-07-15 12:38:59.216892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:28.440 passed 00:04:28.440 Test: mem map adjacent registrations ...passed 00:04:28.440 00:04:28.440 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.440 suites 1 1 n/a 0 0 00:04:28.440 tests 4 4 4 0 0 00:04:28.440 asserts 152 152 152 0 n/a 00:04:28.440 00:04:28.440 Elapsed time = 0.137 seconds 00:04:28.440 00:04:28.440 real 0m0.149s 00:04:28.440 user 0m0.138s 00:04:28.440 sys 0m0.011s 00:04:28.440 12:38:59 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.440 12:38:59 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:28.440 ************************************ 00:04:28.440 END TEST env_memory 00:04:28.440 ************************************ 00:04:28.440 12:38:59 env -- common/autotest_common.sh@1142 -- # return 0 00:04:28.440 12:38:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.440 12:38:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.440 12:38:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.440 12:38:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.440 ************************************ 00:04:28.440 START TEST env_vtophys 00:04:28.440 ************************************ 00:04:28.440 12:38:59 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.440 EAL: lib.eal log level changed from notice to debug 00:04:28.440 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.440 EAL: Detected lcore 1 as core 1 on socket 0 00:04:28.440 EAL: Detected lcore 2 as core 2 on socket 0 00:04:28.440 EAL: Detected lcore 3 as core 3 on socket 0 00:04:28.440 EAL: Detected lcore 4 as core 4 on socket 0 00:04:28.440 EAL: Detected lcore 5 as core 5 on socket 0 00:04:28.440 EAL: Detected lcore 6 as core 6 on socket 0 00:04:28.440 EAL: Detected lcore 7 as core 8 on socket 0 00:04:28.440 EAL: Detected lcore 8 as core 9 on socket 0 00:04:28.440 EAL: Detected lcore 9 as core 10 on socket 0 00:04:28.440 EAL: Detected lcore 10 as core 11 on socket 0 00:04:28.440 EAL: Detected lcore 11 as core 12 on socket 0 00:04:28.440 EAL: Detected lcore 12 as core 13 on socket 0 00:04:28.440 EAL: Detected lcore 13 as core 16 on socket 0 00:04:28.440 EAL: Detected lcore 14 as core 17 on socket 0 00:04:28.440 EAL: Detected lcore 15 as core 18 on socket 0 00:04:28.440 EAL: Detected lcore 16 as core 19 on socket 0 00:04:28.440 EAL: Detected lcore 17 as core 20 on socket 0 00:04:28.440 EAL: Detected lcore 18 as core 21 on socket 0 00:04:28.440 EAL: Detected lcore 19 as core 25 on socket 0 00:04:28.440 EAL: Detected lcore 20 as core 26 on socket 0 00:04:28.440 EAL: Detected lcore 21 as core 27 on socket 0 00:04:28.440 EAL: Detected lcore 22 as core 28 on socket 0 00:04:28.440 EAL: Detected lcore 23 as core 29 on socket 0 00:04:28.440 EAL: Detected lcore 24 as core 0 on socket 1 00:04:28.440 EAL: Detected lcore 25 as core 1 on socket 1 00:04:28.440 EAL: Detected lcore 26 as core 2 on socket 1 00:04:28.440 EAL: Detected lcore 27 as core 3 on socket 1 00:04:28.440 EAL: Detected lcore 28 as core 4 on socket 1 00:04:28.440 EAL: Detected lcore 29 as core 5 on socket 1 00:04:28.440 EAL: Detected lcore 30 as core 6 on socket 1 00:04:28.440 EAL: Detected lcore 31 as core 9 on socket 1 00:04:28.440 EAL: Detected lcore 32 as core 10 on socket 1 00:04:28.440 EAL: Detected lcore 33 as core 11 on socket 1 00:04:28.440 EAL: Detected lcore 34 as core 12 on socket 1 00:04:28.440 EAL: Detected lcore 35 as core 13 on socket 1 00:04:28.440 EAL: Detected lcore 36 as core 16 on socket 1 00:04:28.440 EAL: Detected lcore 37 as core 17 on socket 1 00:04:28.440 EAL: Detected lcore 38 as core 18 on socket 1 00:04:28.440 EAL: Detected lcore 39 as core 19 on socket 1 00:04:28.440 EAL: Detected lcore 40 as core 20 on socket 1 00:04:28.440 EAL: Detected lcore 41 as core 21 on socket 1 00:04:28.440 EAL: Detected lcore 42 as core 24 on socket 1 00:04:28.440 EAL: Detected lcore 43 as core 25 on socket 1 00:04:28.440 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:28.440 EAL: Detected lcore 45 as core 27 on socket 1 00:04:28.440 EAL: Detected lcore 46 as core 28 on socket 1 00:04:28.440 EAL: Detected lcore 47 as core 29 on socket 1 00:04:28.440 EAL: Detected lcore 48 as core 0 on socket 0 00:04:28.440 EAL: Detected lcore 49 as core 1 on socket 0 00:04:28.440 EAL: Detected lcore 50 as core 2 on socket 0 00:04:28.440 EAL: Detected lcore 51 as core 3 on socket 0 00:04:28.440 EAL: Detected lcore 52 as core 4 on socket 0 00:04:28.440 EAL: Detected lcore 53 as core 5 on socket 0 00:04:28.440 EAL: Detected lcore 54 as core 6 on socket 0 00:04:28.440 EAL: Detected lcore 55 as core 8 on socket 0 00:04:28.440 EAL: Detected lcore 56 as core 9 on socket 0 00:04:28.440 EAL: Detected lcore 57 as core 10 on socket 0 00:04:28.440 EAL: Detected lcore 58 as core 11 on socket 0 00:04:28.440 EAL: Detected lcore 59 as core 12 on socket 0 00:04:28.440 EAL: Detected lcore 60 as core 13 on socket 0 00:04:28.440 EAL: Detected lcore 61 as core 16 on socket 0 00:04:28.440 EAL: Detected lcore 62 as core 17 on socket 0 00:04:28.440 EAL: Detected lcore 63 as core 18 on socket 0 00:04:28.440 EAL: Detected lcore 64 as core 19 on socket 0 00:04:28.440 EAL: Detected lcore 65 as core 20 on socket 0 00:04:28.440 EAL: Detected lcore 66 as core 21 on socket 0 00:04:28.440 EAL: Detected lcore 67 as core 25 on socket 0 00:04:28.440 EAL: Detected lcore 68 as core 26 on socket 0 00:04:28.440 EAL: Detected lcore 69 as core 27 on socket 0 00:04:28.440 EAL: Detected lcore 70 as core 28 on socket 0 00:04:28.440 EAL: Detected lcore 71 as core 29 on socket 0 00:04:28.440 EAL: Detected lcore 72 as core 0 on socket 1 00:04:28.440 EAL: Detected lcore 73 as core 1 on socket 1 00:04:28.440 EAL: Detected lcore 74 as core 2 on socket 1 00:04:28.440 EAL: Detected lcore 75 as core 3 on socket 1 00:04:28.440 EAL: Detected lcore 76 as core 4 on socket 1 00:04:28.440 EAL: Detected lcore 77 as core 5 on socket 1 00:04:28.440 EAL: Detected lcore 78 as core 6 on socket 1 00:04:28.440 EAL: Detected lcore 79 as core 9 on socket 1 00:04:28.440 EAL: Detected lcore 80 as core 10 on socket 1 00:04:28.440 EAL: Detected lcore 81 as core 11 on socket 1 00:04:28.440 EAL: Detected lcore 82 as core 12 on socket 1 00:04:28.440 EAL: Detected lcore 83 as core 13 on socket 1 00:04:28.440 EAL: Detected lcore 84 as core 16 on socket 1 00:04:28.440 EAL: Detected lcore 85 as core 17 on socket 1 00:04:28.440 EAL: Detected lcore 86 as core 18 on socket 1 00:04:28.440 EAL: Detected lcore 87 as core 19 on socket 1 00:04:28.440 EAL: Detected lcore 88 as core 20 on socket 1 00:04:28.440 EAL: Detected lcore 89 as core 21 on socket 1 00:04:28.440 EAL: Detected lcore 90 as core 24 on socket 1 00:04:28.440 EAL: Detected lcore 91 as core 25 on socket 1 00:04:28.440 EAL: Detected lcore 92 as core 26 on socket 1 00:04:28.440 EAL: Detected lcore 93 as core 27 on socket 1 00:04:28.440 EAL: Detected lcore 94 as core 28 on socket 1 00:04:28.440 EAL: Detected lcore 95 as core 29 on socket 1 00:04:28.440 EAL: Maximum logical cores by configuration: 128 00:04:28.440 EAL: Detected CPU lcores: 96 00:04:28.440 EAL: Detected NUMA nodes: 2 00:04:28.440 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.440 EAL: Detected shared linkage of DPDK 00:04:28.440 EAL: No shared files mode enabled, IPC will be disabled 00:04:28.440 EAL: Bus pci wants IOVA as 'DC' 00:04:28.440 EAL: Buses did not request a specific IOVA mode. 00:04:28.440 EAL: IOMMU is available, selecting IOVA as VA mode. 
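The lcore table EAL prints above (96 lcores across two sockets, with hyper-thread siblings 48-95 repeating the core IDs of 0-47) is simply the kernel's CPU topology. A sketch that reproduces the same mapping from sysfs on any Linux box (illustrative only, not part of the harness):

  # Print "lcore N as core C on socket S" from sysfs topology.
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}
    core=$(cat "$cpu/topology/core_id")
    sock=$(cat "$cpu/topology/physical_package_id")
    echo "lcore $n as core $core on socket $sock"
  done | sort -t' ' -k2 -n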
00:04:28.440 EAL: Selected IOVA mode 'VA' 00:04:28.440 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.440 EAL: Probing VFIO support... 00:04:28.440 EAL: IOMMU type 1 (Type 1) is supported 00:04:28.440 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:28.440 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:28.440 EAL: VFIO support initialized 00:04:28.440 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.440 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.440 EAL: Setting up physically contiguous memory... 00:04:28.440 EAL: Setting maximum number of open files to 524288 00:04:28.440 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.440 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:28.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.440 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.440 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.440 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.440 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.440 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.440 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.441 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.441 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:28.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:28.441 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:28.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:28.441 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:28.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:28.441 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:28.441 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.441 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:28.441 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.441 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.441 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:28.441 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:28.441 EAL: Hugepages will be freed exactly as allocated. 00:04:28.441 EAL: No shared files mode enabled, IPC is disabled 00:04:28.441 EAL: No shared files mode enabled, IPC is disabled 00:04:28.441 EAL: TSC frequency is ~2300000 KHz 00:04:28.441 EAL: Main lcore 0 is ready (tid=7fdf20b6ba00;cpuset=[0]) 00:04:28.441 EAL: Trying to obtain current memory policy. 00:04:28.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.441 EAL: Restoring previous memory policy: 0 00:04:28.441 EAL: request: mp_malloc_sync 00:04:28.441 EAL: No shared files mode enabled, IPC is disabled 00:04:28.441 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.441 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.700 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.700 00:04:28.700 00:04:28.700 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.700 http://cunit.sourceforge.net/ 00:04:28.700 00:04:28.700 00:04:28.700 Suite: components_suite 00:04:28.700 Test: vtophys_malloc_test ...passed 00:04:28.700 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 4MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 4MB 00:04:28.700 EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 6MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 6MB 00:04:28.700 EAL: Trying to obtain current memory policy. 
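The reservation sizes in the memseg setup above decode neatly: each list holds n_segs:8192 entries of hugepage_sz:2097152 (2 MiB), so the data window per list is 8192 x 2 MiB = 16 GiB, exactly the repeated size = 0x400000000; the small 0x61000 areas in front of each window hold the memseg-list bookkeeping (the "Memseg list allocated" lines). With 4 lists per socket and 2 sockets, EAL pre-reserves 128 GiB of address space up front without touching physical memory. Checking the arithmetic in shell:

  n_segs=8192 hugepage_sz=2097152                 # values from the EAL log
  printf '0x%x per memseg list\n' $(( n_segs * hugepage_sz ))      # 0x400000000
  echo "$(( n_segs * hugepage_sz * 4 * 2 / 2**30 )) GiB reserved"  # 128 GiB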
00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 10MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 10MB 00:04:28.700 EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 18MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 18MB 00:04:28.700 EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 34MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 34MB 00:04:28.700 EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 66MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 66MB 00:04:28.700 EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 130MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 130MB 00:04:28.700 EAL: Trying to obtain current memory policy. 
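One pattern worth pulling out of this malloc suite: the expansion sizes logged (4, 6, 10, 18, 34, 66, 130, then 258, 514, 1026 MB below) are all of the form 2^k + 2 MB, consistent with each round allocating a doubling power-of-two buffer on top of the 2 MB already mapped for the main lcore. A one-liner that reproduces the series (an observation about the log, not harness code):

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB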
00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.700 EAL: Restoring previous memory policy: 4 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was expanded by 258MB 00:04:28.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.700 EAL: request: mp_malloc_sync 00:04:28.700 EAL: No shared files mode enabled, IPC is disabled 00:04:28.700 EAL: Heap on socket 0 was shrunk by 258MB 00:04:28.700 EAL: Trying to obtain current memory policy. 00:04:28.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.959 EAL: Restoring previous memory policy: 4 00:04:28.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.959 EAL: request: mp_malloc_sync 00:04:28.959 EAL: No shared files mode enabled, IPC is disabled 00:04:28.959 EAL: Heap on socket 0 was expanded by 514MB 00:04:28.959 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.959 EAL: request: mp_malloc_sync 00:04:28.959 EAL: No shared files mode enabled, IPC is disabled 00:04:28.959 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.959 EAL: Trying to obtain current memory policy. 00:04:28.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.217 EAL: Restoring previous memory policy: 4 00:04:29.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.217 EAL: request: mp_malloc_sync 00:04:29.217 EAL: No shared files mode enabled, IPC is disabled 00:04:29.217 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.476 EAL: request: mp_malloc_sync 00:04:29.476 EAL: No shared files mode enabled, IPC is disabled 00:04:29.476 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.476 passed 00:04:29.476 00:04:29.476 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.476 suites 1 1 n/a 0 0 00:04:29.476 tests 2 2 2 0 0 00:04:29.476 asserts 497 497 497 0 n/a 00:04:29.476 00:04:29.476 Elapsed time = 0.977 seconds 00:04:29.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.476 EAL: request: mp_malloc_sync 00:04:29.476 EAL: No shared files mode enabled, IPC is disabled 00:04:29.476 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.476 EAL: No shared files mode enabled, IPC is disabled 00:04:29.476 EAL: No shared files mode enabled, IPC is disabled 00:04:29.476 EAL: No shared files mode enabled, IPC is disabled 00:04:29.735 00:04:29.735 real 0m1.103s 00:04:29.735 user 0m0.652s 00:04:29.735 sys 0m0.421s 00:04:29.735 12:39:00 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.735 12:39:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.735 ************************************ 00:04:29.735 END TEST env_vtophys 00:04:29.735 ************************************ 00:04:29.735 12:39:00 env -- common/autotest_common.sh@1142 -- # return 0 00:04:29.735 12:39:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.735 12:39:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.735 12:39:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.735 12:39:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.735 ************************************ 00:04:29.735 START TEST env_pci 00:04:29.735 ************************************ 00:04:29.735 12:39:00 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:29.735 00:04:29.735 00:04:29.735 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.735 http://cunit.sourceforge.net/ 00:04:29.735 00:04:29.735 00:04:29.735 Suite: pci 00:04:29.735 Test: pci_hook ...[2024-07-15 12:39:00.518163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1528801 has claimed it 00:04:29.735 EAL: Cannot find device (10000:00:01.0) 00:04:29.735 EAL: Failed to attach device on primary process 00:04:29.735 passed 00:04:29.735 00:04:29.735 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.735 suites 1 1 n/a 0 0 00:04:29.735 tests 1 1 1 0 0 00:04:29.735 asserts 25 25 25 0 n/a 00:04:29.735 00:04:29.735 Elapsed time = 0.029 seconds 00:04:29.735 00:04:29.735 real 0m0.049s 00:04:29.735 user 0m0.014s 00:04:29.735 sys 0m0.035s 00:04:29.735 12:39:00 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.735 12:39:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.735 ************************************ 00:04:29.735 END TEST env_pci 00:04:29.735 ************************************ 00:04:29.735 12:39:00 env -- common/autotest_common.sh@1142 -- # return 0 00:04:29.735 12:39:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.735 12:39:00 env -- env/env.sh@15 -- # uname 00:04:29.735 12:39:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.735 12:39:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.735 12:39:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.735 12:39:00 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:29.735 12:39:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.735 12:39:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.735 ************************************ 00:04:29.735 START TEST env_dpdk_post_init 00:04:29.735 ************************************ 00:04:29.735 12:39:00 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.735 EAL: Detected CPU lcores: 96 00:04:29.735 EAL: Detected NUMA nodes: 2 00:04:29.735 EAL: Detected shared linkage of DPDK 00:04:29.735 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.735 EAL: Selected IOVA mode 'VA' 00:04:29.735 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.735 EAL: VFIO support initialized 00:04:29.735 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.995 EAL: Using IOMMU type 1 (Type 1) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:29.995 EAL: Ignore mapping IO port bar(1) 00:04:29.995 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:30.931 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:30.931 EAL: Ignore mapping IO port bar(1) 00:04:30.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:34.213 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:34.213 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:34.213 Starting DPDK initialization... 00:04:34.213 Starting SPDK post initialization... 00:04:34.213 SPDK NVMe probe 00:04:34.213 Attaching to 0000:5e:00.0 00:04:34.213 Attached to 0000:5e:00.0 00:04:34.213 Cleaning up... 
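After this many ioatdma/vfio-pci/nvme rebinds, the authoritative record of where a BDF ended up is its sysfs driver symlink, which is what setup.sh status tabulates into the driver table earlier. A quick standalone check (BDF hard-coded to the NVMe device from this run):

  bdf=0000:5e:00.0
  drv=/sys/bus/pci/devices/$bdf/driver
  if [[ -e $drv ]]; then
    echo "$bdf bound to $(basename "$(readlink -f "$drv")")"
  else
    echo "$bdf is unbound"
  fi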
00:04:34.213 00:04:34.213 real 0m4.350s 00:04:34.213 user 0m3.297s 00:04:34.213 sys 0m0.123s 00:04:34.213 12:39:04 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.213 12:39:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.213 ************************************ 00:04:34.213 END TEST env_dpdk_post_init 00:04:34.213 ************************************ 00:04:34.213 12:39:05 env -- common/autotest_common.sh@1142 -- # return 0 00:04:34.213 12:39:05 env -- env/env.sh@26 -- # uname 00:04:34.213 12:39:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:34.213 12:39:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.213 12:39:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.213 12:39:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.213 12:39:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.213 ************************************ 00:04:34.213 START TEST env_mem_callbacks 00:04:34.213 ************************************ 00:04:34.213 12:39:05 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:34.213 EAL: Detected CPU lcores: 96 00:04:34.213 EAL: Detected NUMA nodes: 2 00:04:34.213 EAL: Detected shared linkage of DPDK 00:04:34.213 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.213 EAL: Selected IOVA mode 'VA' 00:04:34.213 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.213 EAL: VFIO support initialized 00:04:34.213 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.213 00:04:34.213 00:04:34.213 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.213 http://cunit.sourceforge.net/ 00:04:34.213 00:04:34.213 00:04:34.213 Suite: memory 00:04:34.213 Test: test ... 
00:04:34.213 register 0x200000200000 2097152 00:04:34.213 malloc 3145728 00:04:34.213 register 0x200000400000 4194304 00:04:34.213 buf 0x200000500000 len 3145728 PASSED 00:04:34.213 malloc 64 00:04:34.213 buf 0x2000004fff40 len 64 PASSED 00:04:34.213 malloc 4194304 00:04:34.213 register 0x200000800000 6291456 00:04:34.213 buf 0x200000a00000 len 4194304 PASSED 00:04:34.213 free 0x200000500000 3145728 00:04:34.213 free 0x2000004fff40 64 00:04:34.213 unregister 0x200000400000 4194304 PASSED 00:04:34.213 free 0x200000a00000 4194304 00:04:34.213 unregister 0x200000800000 6291456 PASSED 00:04:34.213 malloc 8388608 00:04:34.213 register 0x200000400000 10485760 00:04:34.213 buf 0x200000600000 len 8388608 PASSED 00:04:34.213 free 0x200000600000 8388608 00:04:34.213 unregister 0x200000400000 10485760 PASSED 00:04:34.213 passed 00:04:34.213 00:04:34.213 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.213 suites 1 1 n/a 0 0 00:04:34.213 tests 1 1 1 0 0 00:04:34.213 asserts 15 15 15 0 n/a 00:04:34.213 00:04:34.213 Elapsed time = 0.007 seconds 00:04:34.213 00:04:34.213 real 0m0.057s 00:04:34.213 user 0m0.019s 00:04:34.213 sys 0m0.038s 00:04:34.213 12:39:05 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.213 12:39:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:34.213 ************************************ 00:04:34.213 END TEST env_mem_callbacks 00:04:34.213 ************************************ 00:04:34.213 12:39:05 env -- common/autotest_common.sh@1142 -- # return 0 00:04:34.213 00:04:34.213 real 0m6.138s 00:04:34.213 user 0m4.301s 00:04:34.213 sys 0m0.908s 00:04:34.213 12:39:05 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.213 12:39:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.213 ************************************ 00:04:34.213 END TEST env 00:04:34.213 ************************************ 00:04:34.471 12:39:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.471 12:39:05 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.471 12:39:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.471 12:39:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.471 12:39:05 -- common/autotest_common.sh@10 -- # set +x 00:04:34.471 ************************************ 00:04:34.471 START TEST rpc 00:04:34.471 ************************************ 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:34.471 * Looking for test storage... 00:04:34.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.471 12:39:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1529709 00:04:34.471 12:39:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.471 12:39:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:34.471 12:39:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1529709 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@829 -- # '[' -z 1529709 ']' 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
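The "Waiting for process to start up and listen on UNIX domain socket" message here comes from waitforlisten, which in essence polls for the target's RPC socket before any rpc.py traffic is attempted. A bare-bones version of that wait (socket path as in the log; the retry count is illustrative, and the real helper is more thorough):

  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do        # ~10 s of 0.1 s retries
    [[ -S $sock ]] && break
    sleep 0.1
  done
  [[ -S $sock ]] || { echo 'spdk_tgt never came up' >&2; exit 1; }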
00:04:34.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.471 12:39:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.471 [2024-07-15 12:39:05.349668] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:34.471 [2024-07-15 12:39:05.349718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529709 ] 00:04:34.471 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.471 [2024-07-15 12:39:05.416645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.728 [2024-07-15 12:39:05.491461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.728 [2024-07-15 12:39:05.491499] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1529709' to capture a snapshot of events at runtime. 00:04:34.728 [2024-07-15 12:39:05.491506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.729 [2024-07-15 12:39:05.491512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.729 [2024-07-15 12:39:05.491517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1529709 for offline analysis/debug. 00:04:34.729 [2024-07-15 12:39:05.491541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.299 12:39:06 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.299 12:39:06 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:35.299 12:39:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.299 12:39:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.299 12:39:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.299 12:39:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.299 12:39:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.299 12:39:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.299 12:39:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.299 ************************************ 00:04:35.299 START TEST rpc_integrity 00:04:35.299 ************************************ 00:04:35.300 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:35.300 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.300 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.300 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.300 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.300 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:35.300 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.300 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.300 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.300 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.300 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.640 { 00:04:35.640 "name": "Malloc0", 00:04:35.640 "aliases": [ 00:04:35.640 "ac63594e-b8e9-447b-ad8e-e5457d28c56e" 00:04:35.640 ], 00:04:35.640 "product_name": "Malloc disk", 00:04:35.640 "block_size": 512, 00:04:35.640 "num_blocks": 16384, 00:04:35.640 "uuid": "ac63594e-b8e9-447b-ad8e-e5457d28c56e", 00:04:35.640 "assigned_rate_limits": { 00:04:35.640 "rw_ios_per_sec": 0, 00:04:35.640 "rw_mbytes_per_sec": 0, 00:04:35.640 "r_mbytes_per_sec": 0, 00:04:35.640 "w_mbytes_per_sec": 0 00:04:35.640 }, 00:04:35.640 "claimed": false, 00:04:35.640 "zoned": false, 00:04:35.640 "supported_io_types": { 00:04:35.640 "read": true, 00:04:35.640 "write": true, 00:04:35.640 "unmap": true, 00:04:35.640 "flush": true, 00:04:35.640 "reset": true, 00:04:35.640 "nvme_admin": false, 00:04:35.640 "nvme_io": false, 00:04:35.640 "nvme_io_md": false, 00:04:35.640 "write_zeroes": true, 00:04:35.640 "zcopy": true, 00:04:35.640 "get_zone_info": false, 00:04:35.640 "zone_management": false, 00:04:35.640 "zone_append": false, 00:04:35.640 "compare": false, 00:04:35.640 "compare_and_write": false, 00:04:35.640 "abort": true, 00:04:35.640 "seek_hole": false, 00:04:35.640 "seek_data": false, 00:04:35.640 "copy": true, 00:04:35.640 "nvme_iov_md": false 00:04:35.640 }, 00:04:35.640 "memory_domains": [ 00:04:35.640 { 00:04:35.640 "dma_device_id": "system", 00:04:35.640 "dma_device_type": 1 00:04:35.640 }, 00:04:35.640 { 00:04:35.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.640 "dma_device_type": 2 00:04:35.640 } 00:04:35.640 ], 00:04:35.640 "driver_specific": {} 00:04:35.640 } 00:04:35.640 ]' 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.640 [2024-07-15 12:39:06.322026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.640 [2024-07-15 12:39:06.322054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.640 [2024-07-15 12:39:06.322068] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd0a2d0 00:04:35.640 [2024-07-15 12:39:06.322074] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.640 
[2024-07-15 12:39:06.323167] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.640 [2024-07-15 12:39:06.323188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.640 Passthru0 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.640 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.640 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.640 { 00:04:35.640 "name": "Malloc0", 00:04:35.640 "aliases": [ 00:04:35.640 "ac63594e-b8e9-447b-ad8e-e5457d28c56e" 00:04:35.640 ], 00:04:35.640 "product_name": "Malloc disk", 00:04:35.640 "block_size": 512, 00:04:35.640 "num_blocks": 16384, 00:04:35.640 "uuid": "ac63594e-b8e9-447b-ad8e-e5457d28c56e", 00:04:35.640 "assigned_rate_limits": { 00:04:35.640 "rw_ios_per_sec": 0, 00:04:35.640 "rw_mbytes_per_sec": 0, 00:04:35.640 "r_mbytes_per_sec": 0, 00:04:35.640 "w_mbytes_per_sec": 0 00:04:35.640 }, 00:04:35.640 "claimed": true, 00:04:35.640 "claim_type": "exclusive_write", 00:04:35.640 "zoned": false, 00:04:35.640 "supported_io_types": { 00:04:35.640 "read": true, 00:04:35.640 "write": true, 00:04:35.640 "unmap": true, 00:04:35.640 "flush": true, 00:04:35.640 "reset": true, 00:04:35.640 "nvme_admin": false, 00:04:35.640 "nvme_io": false, 00:04:35.640 "nvme_io_md": false, 00:04:35.640 "write_zeroes": true, 00:04:35.640 "zcopy": true, 00:04:35.640 "get_zone_info": false, 00:04:35.640 "zone_management": false, 00:04:35.640 "zone_append": false, 00:04:35.640 "compare": false, 00:04:35.640 "compare_and_write": false, 00:04:35.640 "abort": true, 00:04:35.640 "seek_hole": false, 00:04:35.640 "seek_data": false, 00:04:35.640 "copy": true, 00:04:35.640 "nvme_iov_md": false 00:04:35.640 }, 00:04:35.640 "memory_domains": [ 00:04:35.640 { 00:04:35.640 "dma_device_id": "system", 00:04:35.640 "dma_device_type": 1 00:04:35.640 }, 00:04:35.640 { 00:04:35.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.640 "dma_device_type": 2 00:04:35.640 } 00:04:35.640 ], 00:04:35.640 "driver_specific": {} 00:04:35.640 }, 00:04:35.640 { 00:04:35.640 "name": "Passthru0", 00:04:35.641 "aliases": [ 00:04:35.641 "087571bd-66cf-5c3f-8d12-8e277e63a4d7" 00:04:35.641 ], 00:04:35.641 "product_name": "passthru", 00:04:35.641 "block_size": 512, 00:04:35.641 "num_blocks": 16384, 00:04:35.641 "uuid": "087571bd-66cf-5c3f-8d12-8e277e63a4d7", 00:04:35.641 "assigned_rate_limits": { 00:04:35.641 "rw_ios_per_sec": 0, 00:04:35.641 "rw_mbytes_per_sec": 0, 00:04:35.641 "r_mbytes_per_sec": 0, 00:04:35.641 "w_mbytes_per_sec": 0 00:04:35.641 }, 00:04:35.641 "claimed": false, 00:04:35.641 "zoned": false, 00:04:35.641 "supported_io_types": { 00:04:35.641 "read": true, 00:04:35.641 "write": true, 00:04:35.641 "unmap": true, 00:04:35.641 "flush": true, 00:04:35.641 "reset": true, 00:04:35.641 "nvme_admin": false, 00:04:35.641 "nvme_io": false, 00:04:35.641 "nvme_io_md": false, 00:04:35.641 "write_zeroes": true, 00:04:35.641 "zcopy": true, 00:04:35.641 "get_zone_info": false, 00:04:35.641 "zone_management": false, 00:04:35.641 "zone_append": false, 00:04:35.641 "compare": false, 00:04:35.641 "compare_and_write": false, 00:04:35.641 "abort": true, 00:04:35.641 "seek_hole": false, 
00:04:35.641 "seek_data": false, 00:04:35.641 "copy": true, 00:04:35.641 "nvme_iov_md": false 00:04:35.641 }, 00:04:35.641 "memory_domains": [ 00:04:35.641 { 00:04:35.641 "dma_device_id": "system", 00:04:35.641 "dma_device_type": 1 00:04:35.641 }, 00:04:35.641 { 00:04:35.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.641 "dma_device_type": 2 00:04:35.641 } 00:04:35.641 ], 00:04:35.641 "driver_specific": { 00:04:35.641 "passthru": { 00:04:35.641 "name": "Passthru0", 00:04:35.641 "base_bdev_name": "Malloc0" 00:04:35.641 } 00:04:35.641 } 00:04:35.641 } 00:04:35.641 ]' 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.641 12:39:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.641 00:04:35.641 real 0m0.277s 00:04:35.641 user 0m0.173s 00:04:35.641 sys 0m0.036s 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.641 12:39:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 ************************************ 00:04:35.641 END TEST rpc_integrity 00:04:35.641 ************************************ 00:04:35.641 12:39:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.641 12:39:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.641 12:39:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.641 12:39:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.641 12:39:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 ************************************ 00:04:35.641 START TEST rpc_plugins 00:04:35.641 ************************************ 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:35.641 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.641 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.641 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.641 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.641 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.641 { 00:04:35.641 "name": "Malloc1", 00:04:35.641 "aliases": [ 00:04:35.641 "3232a3a0-f231-4c98-a58f-c0907f93fcc0" 00:04:35.641 ], 00:04:35.641 "product_name": "Malloc disk", 00:04:35.641 "block_size": 4096, 00:04:35.641 "num_blocks": 256, 00:04:35.641 "uuid": "3232a3a0-f231-4c98-a58f-c0907f93fcc0", 00:04:35.641 "assigned_rate_limits": { 00:04:35.641 "rw_ios_per_sec": 0, 00:04:35.641 "rw_mbytes_per_sec": 0, 00:04:35.641 "r_mbytes_per_sec": 0, 00:04:35.641 "w_mbytes_per_sec": 0 00:04:35.641 }, 00:04:35.641 "claimed": false, 00:04:35.641 "zoned": false, 00:04:35.641 "supported_io_types": { 00:04:35.641 "read": true, 00:04:35.641 "write": true, 00:04:35.641 "unmap": true, 00:04:35.641 "flush": true, 00:04:35.641 "reset": true, 00:04:35.641 "nvme_admin": false, 00:04:35.641 "nvme_io": false, 00:04:35.641 "nvme_io_md": false, 00:04:35.641 "write_zeroes": true, 00:04:35.641 "zcopy": true, 00:04:35.641 "get_zone_info": false, 00:04:35.641 "zone_management": false, 00:04:35.641 "zone_append": false, 00:04:35.641 "compare": false, 00:04:35.641 "compare_and_write": false, 00:04:35.641 "abort": true, 00:04:35.641 "seek_hole": false, 00:04:35.641 "seek_data": false, 00:04:35.641 "copy": true, 00:04:35.641 "nvme_iov_md": false 00:04:35.641 }, 00:04:35.641 "memory_domains": [ 00:04:35.641 { 00:04:35.641 "dma_device_id": "system", 00:04:35.641 "dma_device_type": 1 00:04:35.641 }, 00:04:35.641 { 00:04:35.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.641 "dma_device_type": 2 00:04:35.641 } 00:04:35.641 ], 00:04:35.641 "driver_specific": {} 00:04:35.641 } 00:04:35.641 ]' 00:04:35.641 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.900 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.900 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.900 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.900 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.900 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.900 12:39:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.900 00:04:35.900 real 0m0.141s 00:04:35.900 user 0m0.090s 00:04:35.900 sys 0m0.017s 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.900 12:39:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.900 ************************************ 00:04:35.900 END TEST rpc_plugins 00:04:35.900 ************************************ 00:04:35.900 12:39:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.900 12:39:06 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.900 12:39:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.900 12:39:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.900 12:39:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.900 ************************************ 00:04:35.900 START TEST rpc_trace_cmd_test 00:04:35.900 ************************************ 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.900 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1529709", 00:04:35.900 "tpoint_group_mask": "0x8", 00:04:35.900 "iscsi_conn": { 00:04:35.900 "mask": "0x2", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "scsi": { 00:04:35.900 "mask": "0x4", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "bdev": { 00:04:35.900 "mask": "0x8", 00:04:35.900 "tpoint_mask": "0xffffffffffffffff" 00:04:35.900 }, 00:04:35.900 "nvmf_rdma": { 00:04:35.900 "mask": "0x10", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "nvmf_tcp": { 00:04:35.900 "mask": "0x20", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "ftl": { 00:04:35.900 "mask": "0x40", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "blobfs": { 00:04:35.900 "mask": "0x80", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "dsa": { 00:04:35.900 "mask": "0x200", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "thread": { 00:04:35.900 "mask": "0x400", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "nvme_pcie": { 00:04:35.900 "mask": "0x800", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "iaa": { 00:04:35.900 "mask": "0x1000", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "nvme_tcp": { 00:04:35.900 "mask": "0x2000", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "bdev_nvme": { 00:04:35.900 "mask": "0x4000", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 }, 00:04:35.900 "sock": { 00:04:35.900 "mask": "0x8000", 00:04:35.900 "tpoint_mask": "0x0" 00:04:35.900 } 00:04:35.900 }' 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.900 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
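All of the rpc_* cases in this suite are plain bash driving the just-launched target over the /var/tmp/spdk.sock JSON-RPC socket; the rpc_cmd seen in the xtrace is the suite's helper around the stock scripts/rpc.py client. The assertions directly above verify that starting spdk_tgt with -e bdev left the bdev tracepoint group mask at 0xffffffffffffffff and exposed tpoint_shm_path for offline decoding with spdk_trace. A hand-run sketch of the integrity and trace checks, assuming a target started as above and the spdk tree as the working directory (the jq checks mirror the ones in the log):

# Sketch: the rpc_integrity and rpc_trace_cmd_test steps via scripts/rpc.py.
./scripts/rpc.py bdev_malloc_create 8 512                  # 16384 x 512 B blocks -> Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                # 2: Malloc0 + Passthru0
./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask  # 0xffffffffffffffff under -e bdev
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0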
00:04:36.159 00:04:36.159 real 0m0.219s 00:04:36.159 user 0m0.186s 00:04:36.159 sys 0m0.023s 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.159 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 ************************************ 00:04:36.159 END TEST rpc_trace_cmd_test 00:04:36.159 ************************************ 00:04:36.159 12:39:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.159 12:39:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.159 12:39:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.159 12:39:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.159 12:39:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.159 12:39:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.159 12:39:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 ************************************ 00:04:36.159 START TEST rpc_daemon_integrity 00:04:36.159 ************************************ 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.159 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.160 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.160 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.160 { 00:04:36.160 "name": "Malloc2", 00:04:36.160 "aliases": [ 00:04:36.160 "9830d64c-5d6b-4dd9-9c4c-5eb41efda9bf" 00:04:36.160 ], 00:04:36.160 "product_name": "Malloc disk", 00:04:36.160 "block_size": 512, 00:04:36.160 "num_blocks": 16384, 00:04:36.160 "uuid": "9830d64c-5d6b-4dd9-9c4c-5eb41efda9bf", 00:04:36.160 "assigned_rate_limits": { 00:04:36.160 "rw_ios_per_sec": 0, 00:04:36.160 "rw_mbytes_per_sec": 0, 00:04:36.160 "r_mbytes_per_sec": 0, 00:04:36.160 "w_mbytes_per_sec": 0 00:04:36.160 }, 00:04:36.160 "claimed": false, 00:04:36.160 "zoned": false, 00:04:36.160 "supported_io_types": { 00:04:36.160 "read": true, 00:04:36.160 "write": true, 00:04:36.160 "unmap": true, 00:04:36.160 "flush": true, 00:04:36.160 "reset": true, 00:04:36.160 "nvme_admin": false, 00:04:36.160 "nvme_io": false, 
00:04:36.160 "nvme_io_md": false, 00:04:36.160 "write_zeroes": true, 00:04:36.160 "zcopy": true, 00:04:36.160 "get_zone_info": false, 00:04:36.160 "zone_management": false, 00:04:36.160 "zone_append": false, 00:04:36.160 "compare": false, 00:04:36.160 "compare_and_write": false, 00:04:36.160 "abort": true, 00:04:36.160 "seek_hole": false, 00:04:36.160 "seek_data": false, 00:04:36.160 "copy": true, 00:04:36.160 "nvme_iov_md": false 00:04:36.160 }, 00:04:36.160 "memory_domains": [ 00:04:36.160 { 00:04:36.160 "dma_device_id": "system", 00:04:36.160 "dma_device_type": 1 00:04:36.160 }, 00:04:36.160 { 00:04:36.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.160 "dma_device_type": 2 00:04:36.160 } 00:04:36.160 ], 00:04:36.160 "driver_specific": {} 00:04:36.160 } 00:04:36.160 ]' 00:04:36.160 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.419 [2024-07-15 12:39:07.160304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.419 [2024-07-15 12:39:07.160332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.419 [2024-07-15 12:39:07.160345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xea1ac0 00:04:36.419 [2024-07-15 12:39:07.160351] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.419 [2024-07-15 12:39:07.161345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.419 [2024-07-15 12:39:07.161365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.419 Passthru0 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.419 { 00:04:36.419 "name": "Malloc2", 00:04:36.419 "aliases": [ 00:04:36.419 "9830d64c-5d6b-4dd9-9c4c-5eb41efda9bf" 00:04:36.419 ], 00:04:36.419 "product_name": "Malloc disk", 00:04:36.419 "block_size": 512, 00:04:36.419 "num_blocks": 16384, 00:04:36.419 "uuid": "9830d64c-5d6b-4dd9-9c4c-5eb41efda9bf", 00:04:36.419 "assigned_rate_limits": { 00:04:36.419 "rw_ios_per_sec": 0, 00:04:36.419 "rw_mbytes_per_sec": 0, 00:04:36.419 "r_mbytes_per_sec": 0, 00:04:36.419 "w_mbytes_per_sec": 0 00:04:36.419 }, 00:04:36.419 "claimed": true, 00:04:36.419 "claim_type": "exclusive_write", 00:04:36.419 "zoned": false, 00:04:36.419 "supported_io_types": { 00:04:36.419 "read": true, 00:04:36.419 "write": true, 00:04:36.419 "unmap": true, 00:04:36.419 "flush": true, 00:04:36.419 "reset": true, 00:04:36.419 "nvme_admin": false, 00:04:36.419 "nvme_io": false, 00:04:36.419 "nvme_io_md": false, 00:04:36.419 "write_zeroes": true, 00:04:36.419 "zcopy": true, 00:04:36.419 "get_zone_info": 
false, 00:04:36.419 "zone_management": false, 00:04:36.419 "zone_append": false, 00:04:36.419 "compare": false, 00:04:36.419 "compare_and_write": false, 00:04:36.419 "abort": true, 00:04:36.419 "seek_hole": false, 00:04:36.419 "seek_data": false, 00:04:36.419 "copy": true, 00:04:36.419 "nvme_iov_md": false 00:04:36.419 }, 00:04:36.419 "memory_domains": [ 00:04:36.419 { 00:04:36.419 "dma_device_id": "system", 00:04:36.419 "dma_device_type": 1 00:04:36.419 }, 00:04:36.419 { 00:04:36.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.419 "dma_device_type": 2 00:04:36.419 } 00:04:36.419 ], 00:04:36.419 "driver_specific": {} 00:04:36.419 }, 00:04:36.419 { 00:04:36.419 "name": "Passthru0", 00:04:36.419 "aliases": [ 00:04:36.419 "659c7936-67f0-5da4-b57b-acd158ae65cd" 00:04:36.419 ], 00:04:36.419 "product_name": "passthru", 00:04:36.419 "block_size": 512, 00:04:36.419 "num_blocks": 16384, 00:04:36.419 "uuid": "659c7936-67f0-5da4-b57b-acd158ae65cd", 00:04:36.419 "assigned_rate_limits": { 00:04:36.419 "rw_ios_per_sec": 0, 00:04:36.419 "rw_mbytes_per_sec": 0, 00:04:36.419 "r_mbytes_per_sec": 0, 00:04:36.419 "w_mbytes_per_sec": 0 00:04:36.419 }, 00:04:36.419 "claimed": false, 00:04:36.419 "zoned": false, 00:04:36.419 "supported_io_types": { 00:04:36.419 "read": true, 00:04:36.419 "write": true, 00:04:36.419 "unmap": true, 00:04:36.419 "flush": true, 00:04:36.419 "reset": true, 00:04:36.419 "nvme_admin": false, 00:04:36.419 "nvme_io": false, 00:04:36.419 "nvme_io_md": false, 00:04:36.419 "write_zeroes": true, 00:04:36.419 "zcopy": true, 00:04:36.419 "get_zone_info": false, 00:04:36.419 "zone_management": false, 00:04:36.419 "zone_append": false, 00:04:36.419 "compare": false, 00:04:36.419 "compare_and_write": false, 00:04:36.419 "abort": true, 00:04:36.419 "seek_hole": false, 00:04:36.419 "seek_data": false, 00:04:36.419 "copy": true, 00:04:36.419 "nvme_iov_md": false 00:04:36.419 }, 00:04:36.419 "memory_domains": [ 00:04:36.419 { 00:04:36.419 "dma_device_id": "system", 00:04:36.419 "dma_device_type": 1 00:04:36.419 }, 00:04:36.419 { 00:04:36.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.419 "dma_device_type": 2 00:04:36.419 } 00:04:36.419 ], 00:04:36.419 "driver_specific": { 00:04:36.419 "passthru": { 00:04:36.419 "name": "Passthru0", 00:04:36.419 "base_bdev_name": "Malloc2" 00:04:36.419 } 00:04:36.419 } 00:04:36.419 } 00:04:36.419 ]' 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.419 12:39:07 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.419 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.420 12:39:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.420 00:04:36.420 real 0m0.276s 00:04:36.420 user 0m0.169s 00:04:36.420 sys 0m0.041s 00:04:36.420 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.420 12:39:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.420 ************************************ 00:04:36.420 END TEST rpc_daemon_integrity 00:04:36.420 ************************************ 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.420 12:39:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.420 12:39:07 rpc -- rpc/rpc.sh@84 -- # killprocess 1529709 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@948 -- # '[' -z 1529709 ']' 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@952 -- # kill -0 1529709 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@953 -- # uname 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1529709 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1529709' 00:04:36.420 killing process with pid 1529709 00:04:36.420 12:39:07 rpc -- common/autotest_common.sh@967 -- # kill 1529709 00:04:36.678 12:39:07 rpc -- common/autotest_common.sh@972 -- # wait 1529709 00:04:36.952 00:04:36.952 real 0m2.475s 00:04:36.952 user 0m3.179s 00:04:36.952 sys 0m0.686s 00:04:36.952 12:39:07 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.952 12:39:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.952 ************************************ 00:04:36.952 END TEST rpc 00:04:36.952 ************************************ 00:04:36.952 12:39:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.952 12:39:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.952 12:39:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.952 12:39:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.952 12:39:07 -- common/autotest_common.sh@10 -- # set +x 00:04:36.952 ************************************ 00:04:36.952 START TEST skip_rpc 00:04:36.952 ************************************ 00:04:36.952 12:39:07 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.952 * Looking for test storage... 
00:04:36.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.952 12:39:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.952 12:39:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.952 12:39:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.952 12:39:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.952 12:39:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.952 12:39:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.952 ************************************ 00:04:36.952 START TEST skip_rpc 00:04:36.952 ************************************ 00:04:36.952 12:39:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:36.952 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1530371 00:04:36.952 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.952 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.952 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:37.211 [2024-07-15 12:39:07.928258] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:37.211 [2024-07-15 12:39:07.928301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530371 ] 00:04:37.211 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.211 [2024-07-15 12:39:07.980681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.211 [2024-07-15 12:39:08.053944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.487 12:39:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.487 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:42.487 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.487 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:42.487 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.487 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1530371 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1530371 ']' 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1530371 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1530371 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1530371' 00:04:42.488 killing process with pid 1530371 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1530371 00:04:42.488 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1530371 00:04:42.488 00:04:42.488 real 0m5.366s 00:04:42.488 user 0m5.131s 00:04:42.488 sys 0m0.255s 00:04:42.488 12:39:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.488 12:39:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.488 ************************************ 00:04:42.488 END TEST skip_rpc 00:04:42.488 ************************************ 00:04:42.488 12:39:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:42.488 12:39:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.488 12:39:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.488 12:39:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.488 12:39:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.488 ************************************ 00:04:42.488 START TEST skip_rpc_with_json 00:04:42.488 ************************************ 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1531682 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1531682 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1531682 ']' 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
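The skip_rpc case that just ended is the negative test: its target was started with --no-rpc-server, so nothing listens on /var/tmp/spdk.sock and the NOT wrapper asserts that rpc_cmd spdk_get_version exits non-zero (the es=1 bookkeeping above), after which killprocess confirms the pid still names reactor_0 before killing and reaping it. A rough manual equivalent, with sleep standing in for the suite's readiness handling:

# Sketch: RPC calls must fail while the RPC server is disabled.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!; sleep 5
./scripts/rpc.py spdk_get_version && echo BUG || echo 'failed as expected'
[ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]       # target still up
kill "$pid" && wait "$pid"

skip_rpc_with_json, starting below, flips this around: the target is launched with the RPC server enabled so its state can be built up and snapshotted over the socket.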
00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.488 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.488 [2024-07-15 12:39:13.361359] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:42.488 [2024-07-15 12:39:13.361398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531682 ] 00:04:42.488 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.488 [2024-07-15 12:39:13.425414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.746 [2024-07-15 12:39:13.505190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.313 [2024-07-15 12:39:14.159872] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.313 request: 00:04:43.313 { 00:04:43.313 "trtype": "tcp", 00:04:43.313 "method": "nvmf_get_transports", 00:04:43.313 "req_id": 1 00:04:43.313 } 00:04:43.313 Got JSON-RPC error response 00:04:43.313 response: 00:04:43.313 { 00:04:43.313 "code": -19, 00:04:43.313 "message": "No such device" 00:04:43.313 } 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.313 [2024-07-15 12:39:14.167968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.313 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.571 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.571 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.571 { 00:04:43.571 "subsystems": [ 00:04:43.571 { 00:04:43.571 "subsystem": "vfio_user_target", 00:04:43.571 "config": null 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "subsystem": "keyring", 00:04:43.571 "config": [] 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "subsystem": "iobuf", 00:04:43.571 "config": [ 00:04:43.571 { 00:04:43.571 "method": "iobuf_set_options", 00:04:43.571 "params": { 00:04:43.571 "small_pool_count": 8192, 00:04:43.571 "large_pool_count": 1024, 00:04:43.571 "small_bufsize": 8192, 00:04:43.571 "large_bufsize": 
135168 00:04:43.571 } 00:04:43.571 } 00:04:43.571 ] 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "subsystem": "sock", 00:04:43.571 "config": [ 00:04:43.571 { 00:04:43.571 "method": "sock_set_default_impl", 00:04:43.571 "params": { 00:04:43.571 "impl_name": "posix" 00:04:43.571 } 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "method": "sock_impl_set_options", 00:04:43.571 "params": { 00:04:43.571 "impl_name": "ssl", 00:04:43.571 "recv_buf_size": 4096, 00:04:43.571 "send_buf_size": 4096, 00:04:43.571 "enable_recv_pipe": true, 00:04:43.571 "enable_quickack": false, 00:04:43.571 "enable_placement_id": 0, 00:04:43.571 "enable_zerocopy_send_server": true, 00:04:43.571 "enable_zerocopy_send_client": false, 00:04:43.571 "zerocopy_threshold": 0, 00:04:43.571 "tls_version": 0, 00:04:43.571 "enable_ktls": false 00:04:43.571 } 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "method": "sock_impl_set_options", 00:04:43.571 "params": { 00:04:43.571 "impl_name": "posix", 00:04:43.571 "recv_buf_size": 2097152, 00:04:43.571 "send_buf_size": 2097152, 00:04:43.571 "enable_recv_pipe": true, 00:04:43.571 "enable_quickack": false, 00:04:43.571 "enable_placement_id": 0, 00:04:43.571 "enable_zerocopy_send_server": true, 00:04:43.571 "enable_zerocopy_send_client": false, 00:04:43.571 "zerocopy_threshold": 0, 00:04:43.571 "tls_version": 0, 00:04:43.571 "enable_ktls": false 00:04:43.571 } 00:04:43.571 } 00:04:43.571 ] 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "subsystem": "vmd", 00:04:43.571 "config": [] 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "subsystem": "accel", 00:04:43.571 "config": [ 00:04:43.571 { 00:04:43.571 "method": "accel_set_options", 00:04:43.571 "params": { 00:04:43.571 "small_cache_size": 128, 00:04:43.571 "large_cache_size": 16, 00:04:43.571 "task_count": 2048, 00:04:43.571 "sequence_count": 2048, 00:04:43.571 "buf_count": 2048 00:04:43.571 } 00:04:43.571 } 00:04:43.571 ] 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "subsystem": "bdev", 00:04:43.571 "config": [ 00:04:43.571 { 00:04:43.571 "method": "bdev_set_options", 00:04:43.571 "params": { 00:04:43.571 "bdev_io_pool_size": 65535, 00:04:43.571 "bdev_io_cache_size": 256, 00:04:43.571 "bdev_auto_examine": true, 00:04:43.571 "iobuf_small_cache_size": 128, 00:04:43.571 "iobuf_large_cache_size": 16 00:04:43.571 } 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "method": "bdev_raid_set_options", 00:04:43.571 "params": { 00:04:43.571 "process_window_size_kb": 1024 00:04:43.571 } 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "method": "bdev_iscsi_set_options", 00:04:43.571 "params": { 00:04:43.571 "timeout_sec": 30 00:04:43.571 } 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "method": "bdev_nvme_set_options", 00:04:43.571 "params": { 00:04:43.571 "action_on_timeout": "none", 00:04:43.571 "timeout_us": 0, 00:04:43.571 "timeout_admin_us": 0, 00:04:43.571 "keep_alive_timeout_ms": 10000, 00:04:43.571 "arbitration_burst": 0, 00:04:43.571 "low_priority_weight": 0, 00:04:43.571 "medium_priority_weight": 0, 00:04:43.571 "high_priority_weight": 0, 00:04:43.571 "nvme_adminq_poll_period_us": 10000, 00:04:43.571 "nvme_ioq_poll_period_us": 0, 00:04:43.571 "io_queue_requests": 0, 00:04:43.571 "delay_cmd_submit": true, 00:04:43.571 "transport_retry_count": 4, 00:04:43.571 "bdev_retry_count": 3, 00:04:43.571 "transport_ack_timeout": 0, 00:04:43.571 "ctrlr_loss_timeout_sec": 0, 00:04:43.571 "reconnect_delay_sec": 0, 00:04:43.571 "fast_io_fail_timeout_sec": 0, 00:04:43.571 "disable_auto_failback": false, 00:04:43.571 "generate_uuids": false, 00:04:43.571 "transport_tos": 0, 
00:04:43.571 "nvme_error_stat": false, 00:04:43.571 "rdma_srq_size": 0, 00:04:43.571 "io_path_stat": false, 00:04:43.571 "allow_accel_sequence": false, 00:04:43.571 "rdma_max_cq_size": 0, 00:04:43.571 "rdma_cm_event_timeout_ms": 0, 00:04:43.571 "dhchap_digests": [ 00:04:43.571 "sha256", 00:04:43.571 "sha384", 00:04:43.571 "sha512" 00:04:43.571 ], 00:04:43.571 "dhchap_dhgroups": [ 00:04:43.571 "null", 00:04:43.571 "ffdhe2048", 00:04:43.571 "ffdhe3072", 00:04:43.571 "ffdhe4096", 00:04:43.571 "ffdhe6144", 00:04:43.571 "ffdhe8192" 00:04:43.571 ] 00:04:43.571 } 00:04:43.571 }, 00:04:43.571 { 00:04:43.571 "method": "bdev_nvme_set_hotplug", 00:04:43.571 "params": { 00:04:43.571 "period_us": 100000, 00:04:43.571 "enable": false 00:04:43.571 } 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "method": "bdev_wait_for_examine" 00:04:43.572 } 00:04:43.572 ] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "scsi", 00:04:43.572 "config": null 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "scheduler", 00:04:43.572 "config": [ 00:04:43.572 { 00:04:43.572 "method": "framework_set_scheduler", 00:04:43.572 "params": { 00:04:43.572 "name": "static" 00:04:43.572 } 00:04:43.572 } 00:04:43.572 ] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "vhost_scsi", 00:04:43.572 "config": [] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "vhost_blk", 00:04:43.572 "config": [] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "ublk", 00:04:43.572 "config": [] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "nbd", 00:04:43.572 "config": [] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "nvmf", 00:04:43.572 "config": [ 00:04:43.572 { 00:04:43.572 "method": "nvmf_set_config", 00:04:43.572 "params": { 00:04:43.572 "discovery_filter": "match_any", 00:04:43.572 "admin_cmd_passthru": { 00:04:43.572 "identify_ctrlr": false 00:04:43.572 } 00:04:43.572 } 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "method": "nvmf_set_max_subsystems", 00:04:43.572 "params": { 00:04:43.572 "max_subsystems": 1024 00:04:43.572 } 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "method": "nvmf_set_crdt", 00:04:43.572 "params": { 00:04:43.572 "crdt1": 0, 00:04:43.572 "crdt2": 0, 00:04:43.572 "crdt3": 0 00:04:43.572 } 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "method": "nvmf_create_transport", 00:04:43.572 "params": { 00:04:43.572 "trtype": "TCP", 00:04:43.572 "max_queue_depth": 128, 00:04:43.572 "max_io_qpairs_per_ctrlr": 127, 00:04:43.572 "in_capsule_data_size": 4096, 00:04:43.572 "max_io_size": 131072, 00:04:43.572 "io_unit_size": 131072, 00:04:43.572 "max_aq_depth": 128, 00:04:43.572 "num_shared_buffers": 511, 00:04:43.572 "buf_cache_size": 4294967295, 00:04:43.572 "dif_insert_or_strip": false, 00:04:43.572 "zcopy": false, 00:04:43.572 "c2h_success": true, 00:04:43.572 "sock_priority": 0, 00:04:43.572 "abort_timeout_sec": 1, 00:04:43.572 "ack_timeout": 0, 00:04:43.572 "data_wr_pool_size": 0 00:04:43.572 } 00:04:43.572 } 00:04:43.572 ] 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "subsystem": "iscsi", 00:04:43.572 "config": [ 00:04:43.572 { 00:04:43.572 "method": "iscsi_set_options", 00:04:43.572 "params": { 00:04:43.572 "node_base": "iqn.2016-06.io.spdk", 00:04:43.572 "max_sessions": 128, 00:04:43.572 "max_connections_per_session": 2, 00:04:43.572 "max_queue_depth": 64, 00:04:43.572 "default_time2wait": 2, 00:04:43.572 "default_time2retain": 20, 00:04:43.572 "first_burst_length": 8192, 00:04:43.572 "immediate_data": true, 00:04:43.572 "allow_duplicated_isid": false, 00:04:43.572 
"error_recovery_level": 0, 00:04:43.572 "nop_timeout": 60, 00:04:43.572 "nop_in_interval": 30, 00:04:43.572 "disable_chap": false, 00:04:43.572 "require_chap": false, 00:04:43.572 "mutual_chap": false, 00:04:43.572 "chap_group": 0, 00:04:43.572 "max_large_datain_per_connection": 64, 00:04:43.572 "max_r2t_per_connection": 4, 00:04:43.572 "pdu_pool_size": 36864, 00:04:43.572 "immediate_data_pool_size": 16384, 00:04:43.572 "data_out_pool_size": 2048 00:04:43.572 } 00:04:43.572 } 00:04:43.572 ] 00:04:43.572 } 00:04:43.572 ] 00:04:43.572 } 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1531682 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1531682 ']' 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1531682 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1531682 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1531682' 00:04:43.572 killing process with pid 1531682 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1531682 00:04:43.572 12:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1531682 00:04:43.830 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1531920 00:04:43.830 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.830 12:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1531920 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1531920 ']' 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1531920 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1531920 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1531920' 00:04:49.100 killing process with pid 1531920 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1531920 00:04:49.100 12:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1531920 
00:04:49.100 12:39:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.100 12:39:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.100 00:04:49.100 real 0m6.733s 00:04:49.100 user 0m6.544s 00:04:49.100 sys 0m0.605s 00:04:49.100 12:39:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.100 12:39:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.100 ************************************ 00:04:49.100 END TEST skip_rpc_with_json 00:04:49.100 ************************************ 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.359 12:39:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.359 ************************************ 00:04:49.359 START TEST skip_rpc_with_delay 00:04:49.359 ************************************ 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.359 [2024-07-15 12:39:20.166812] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:49.359 [2024-07-15 12:39:20.166875] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:49.359 00:04:49.359 real 0m0.066s 00:04:49.359 user 0m0.035s 00:04:49.359 sys 0m0.030s 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.359 12:39:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.359 ************************************ 00:04:49.359 END TEST skip_rpc_with_delay 00:04:49.359 ************************************ 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:49.359 12:39:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.359 12:39:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.359 12:39:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.359 12:39:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.359 ************************************ 00:04:49.359 START TEST exit_on_failed_rpc_init 00:04:49.359 ************************************ 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1532895 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1532895 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1532895 ']' 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.359 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.359 [2024-07-15 12:39:20.298180] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:49.359 [2024-07-15 12:39:20.298221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532895 ] 00:04:49.618 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.618 [2024-07-15 12:39:20.366420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.618 [2024-07-15 12:39:20.444969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.184 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.442 [2024-07-15 12:39:21.154076] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:50.442 [2024-07-15 12:39:21.154125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533126 ] 00:04:50.442 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.442 [2024-07-15 12:39:21.219339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.442 [2024-07-15 12:39:21.291932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.442 [2024-07-15 12:39:21.291998] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:50.442 [2024-07-15 12:39:21.292006] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.442 [2024-07-15 12:39:21.292013] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1532895 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1532895 ']' 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1532895 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.442 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1532895 00:04:50.701 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.701 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.701 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1532895' 00:04:50.701 killing process with pid 1532895 00:04:50.701 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1532895 00:04:50.701 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1532895 00:04:50.959 00:04:50.959 real 0m1.466s 00:04:50.959 user 0m1.684s 00:04:50.959 sys 0m0.415s 00:04:50.959 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.959 12:39:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.959 ************************************ 00:04:50.959 END TEST exit_on_failed_rpc_init 00:04:50.959 ************************************ 00:04:50.959 12:39:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.959 12:39:21 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.959 00:04:50.959 real 0m13.994s 00:04:50.959 user 0m13.530s 00:04:50.959 sys 0m1.557s 00:04:50.959 12:39:21 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.959 12:39:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.959 ************************************ 00:04:50.959 END TEST skip_rpc 00:04:50.959 ************************************ 00:04:50.959 12:39:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.959 12:39:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.959 12:39:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.959 12:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.959 12:39:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.959 ************************************ 00:04:50.959 START TEST rpc_client 00:04:50.959 ************************************ 00:04:50.960 12:39:21 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.960 * Looking for test storage... 00:04:50.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:50.960 12:39:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.219 OK 00:04:51.219 12:39:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.219 00:04:51.219 real 0m0.114s 00:04:51.219 user 0m0.049s 00:04:51.219 sys 0m0.073s 00:04:51.219 12:39:21 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.219 12:39:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.219 ************************************ 00:04:51.219 END TEST rpc_client 00:04:51.219 ************************************ 00:04:51.219 12:39:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.219 12:39:21 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.219 12:39:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.219 12:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.219 12:39:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.219 ************************************ 00:04:51.219 START TEST json_config 00:04:51.219 ************************************ 00:04:51.219 12:39:21 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.219 
12:39:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.219 12:39:22 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.219 12:39:22 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.219 12:39:22 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.219 12:39:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.219 12:39:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.219 12:39:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.219 12:39:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.219 12:39:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@47 -- # : 0 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.219 12:39:22 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.219 12:39:22 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:51.219 INFO: JSON configuration test init 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:51.219 12:39:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.219 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:51.219 12:39:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.219 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.219 12:39:22 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.219 12:39:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.219 12:39:22 json_config -- json_config/common.sh@10 -- # shift 00:04:51.219 12:39:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.219 12:39:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.219 12:39:22 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.219 12:39:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.219 12:39:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.220 12:39:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1533351 00:04:51.220 12:39:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.220 Waiting for target to run... 00:04:51.220 12:39:22 json_config -- json_config/common.sh@25 -- # waitforlisten 1533351 /var/tmp/spdk_tgt.sock 00:04:51.220 12:39:22 json_config -- common/autotest_common.sh@829 -- # '[' -z 1533351 ']' 00:04:51.220 12:39:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.220 12:39:22 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.220 12:39:22 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.220 12:39:22 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.220 12:39:22 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.220 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.220 [2024-07-15 12:39:22.157338] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:51.220 [2024-07-15 12:39:22.157392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533351 ] 00:04:51.477 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.735 [2024-07-15 12:39:22.433474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.735 [2024-07-15 12:39:22.501053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.302 12:39:22 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.302 12:39:22 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:52.302 12:39:22 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.302 00:04:52.302 12:39:22 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:52.302 12:39:22 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:52.302 12:39:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.302 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.302 12:39:22 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:52.302 12:39:22 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:52.302 12:39:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:52.302 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.302 12:39:22 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.302 12:39:22 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:52.302 12:39:22 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:55.583 12:39:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.583 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:55.583 12:39:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:55.583 12:39:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.583 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:55.583 12:39:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.583 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.583 12:39:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.583 MallocForNvmf0 00:04:55.583 12:39:26 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.583 12:39:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.841 MallocForNvmf1 00:04:55.841 12:39:26 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.841 12:39:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.841 [2024-07-15 12:39:26.784917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.100 12:39:26 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.100 12:39:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.100 12:39:26 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.100 12:39:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.359 12:39:27 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.359 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.359 12:39:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.359 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.618 [2024-07-15 12:39:27.451018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.618 12:39:27 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:56.618 12:39:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.618 12:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.618 12:39:27 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:56.618 12:39:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.618 12:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.618 12:39:27 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:56.618 12:39:27 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:56.618 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:56.877 MallocBdevForConfigChangeCheck 00:04:56.877 12:39:27 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:56.877 12:39:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.877 12:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.877 12:39:27 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:56.877 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.135 12:39:28 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:57.135 INFO: shutting down applications... 00:04:57.135 12:39:28 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:57.135 12:39:28 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:57.135 12:39:28 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:57.135 12:39:28 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:59.039 Calling clear_iscsi_subsystem 00:04:59.039 Calling clear_nvmf_subsystem 00:04:59.039 Calling clear_nbd_subsystem 00:04:59.039 Calling clear_ublk_subsystem 00:04:59.039 Calling clear_vhost_blk_subsystem 00:04:59.039 Calling clear_vhost_scsi_subsystem 00:04:59.039 Calling clear_bdev_subsystem 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@345 -- # break 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:59.039 12:39:29 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:59.039 12:39:29 json_config -- json_config/common.sh@31 -- # local app=target 00:04:59.039 12:39:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.039 12:39:29 json_config -- json_config/common.sh@35 -- # [[ -n 1533351 ]] 00:04:59.039 12:39:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1533351 00:04:59.039 12:39:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.039 12:39:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.039 12:39:29 json_config -- json_config/common.sh@41 -- # kill -0 1533351 00:04:59.039 12:39:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.608 12:39:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.608 12:39:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.608 12:39:30 json_config -- json_config/common.sh@41 -- # kill -0 1533351 00:04:59.608 12:39:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.608 12:39:30 json_config -- json_config/common.sh@43 -- # break 00:04:59.608 12:39:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.608 12:39:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:59.608 SPDK target shutdown done 00:04:59.608 12:39:30 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:59.608 INFO: relaunching applications... 00:04:59.608 12:39:30 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.608 12:39:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:59.608 12:39:30 json_config -- json_config/common.sh@10 -- # shift 00:04:59.608 12:39:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.608 12:39:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.608 12:39:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.608 12:39:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.608 12:39:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.608 12:39:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1534941 00:04:59.608 12:39:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.608 Waiting for target to run... 00:04:59.608 12:39:30 json_config -- json_config/common.sh@25 -- # waitforlisten 1534941 /var/tmp/spdk_tgt.sock 00:04:59.608 12:39:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.608 12:39:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 1534941 ']' 00:04:59.608 12:39:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.608 12:39:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.608 12:39:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.608 12:39:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.608 12:39:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 [2024-07-15 12:39:30.540355] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:04:59.608 [2024-07-15 12:39:30.540414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534941 ] 00:04:59.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.171 [2024-07-15 12:39:30.844439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.171 [2024-07-15 12:39:30.914459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.457 [2024-07-15 12:39:33.927191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.457 [2024-07-15 12:39:33.959520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:03.457 12:39:33 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.457 12:39:33 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:03.457 12:39:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:03.457 00:05:03.457 12:39:33 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:03.457 12:39:33 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:03.457 INFO: Checking if target configuration is the same... 00:05:03.457 12:39:33 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.457 12:39:33 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:03.457 12:39:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.457 + '[' 2 -ne 2 ']' 00:05:03.457 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:03.457 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:03.457 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:03.457 +++ basename /dev/fd/62 00:05:03.457 ++ mktemp /tmp/62.XXX 00:05:03.457 + tmp_file_1=/tmp/62.IlK 00:05:03.457 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.457 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:03.457 + tmp_file_2=/tmp/spdk_tgt_config.json.784 00:05:03.457 + ret=0 00:05:03.457 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.457 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.457 + diff -u /tmp/62.IlK /tmp/spdk_tgt_config.json.784 00:05:03.457 + echo 'INFO: JSON config files are the same' 00:05:03.457 INFO: JSON config files are the same 00:05:03.457 + rm /tmp/62.IlK /tmp/spdk_tgt_config.json.784 00:05:03.457 + exit 0 00:05:03.457 12:39:34 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:03.457 12:39:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:03.457 INFO: changing configuration and checking if this can be detected... 
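The comparison that just passed is the core of the check: json_diff.sh canonicalizes both configurations with config_filter.py before diffing, so JSON ordering differences cannot cause false mismatches. Roughly, with illustrative file names in place of the mktemp paths shown above:

# Sketch of the json_config comparison: canonicalize, then diff.
RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
FILTER=./test/json_config/config_filter.py

$RPC save_config > /tmp/live.json                    # current target state
$FILTER -method sort < /tmp/live.json       > /tmp/live.sorted
$FILTER -method sort < spdk_tgt_config.json > /tmp/saved.sorted

if diff -u /tmp/live.sorted /tmp/saved.sorted; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
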
00:05:03.457 12:39:34 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.457 12:39:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.715 12:39:34 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.715 12:39:34 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:03.715 12:39:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.715 + '[' 2 -ne 2 ']' 00:05:03.715 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:03.715 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:03.715 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:03.715 +++ basename /dev/fd/62 00:05:03.715 ++ mktemp /tmp/62.XXX 00:05:03.715 + tmp_file_1=/tmp/62.p0w 00:05:03.715 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.715 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:03.715 + tmp_file_2=/tmp/spdk_tgt_config.json.Fa7 00:05:03.715 + ret=0 00:05:03.715 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.974 + diff -u /tmp/62.p0w /tmp/spdk_tgt_config.json.Fa7 00:05:03.974 + ret=1 00:05:03.974 + echo '=== Start of file: /tmp/62.p0w ===' 00:05:03.974 + cat /tmp/62.p0w 00:05:03.974 + echo '=== End of file: /tmp/62.p0w ===' 00:05:03.974 + echo '' 00:05:03.974 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Fa7 ===' 00:05:03.974 + cat /tmp/spdk_tgt_config.json.Fa7 00:05:03.974 + echo '=== End of file: /tmp/spdk_tgt_config.json.Fa7 ===' 00:05:03.974 + echo '' 00:05:03.974 + rm /tmp/62.p0w /tmp/spdk_tgt_config.json.Fa7 00:05:03.974 + exit 1 00:05:03.974 12:39:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:03.974 INFO: configuration change detected. 
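What changed is deliberately trivial: a throwaway malloc bdev named MallocBdevForConfigChangeCheck was created right after setup, and the bdev_malloc_delete call above removes it, which is enough to make the sorted configurations diverge and the diff exit non-zero. The canary amounts to the following pair of RPCs (socket path as in the trace):

RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

# create the canary: an 8 MiB malloc bdev with 512-byte blocks
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

# ... later, to force a detectable change before rerunning the diff:
$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
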
00:05:03.974 12:39:34 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:03.974 12:39:34 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:03.974 12:39:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.974 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@317 -- # [[ -n 1534941 ]] 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:04.232 12:39:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.232 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:04.232 12:39:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.232 12:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.232 12:39:34 json_config -- json_config/json_config.sh@323 -- # killprocess 1534941 00:05:04.232 12:39:34 json_config -- common/autotest_common.sh@948 -- # '[' -z 1534941 ']' 00:05:04.232 12:39:34 json_config -- common/autotest_common.sh@952 -- # kill -0 1534941 00:05:04.233 12:39:34 json_config -- common/autotest_common.sh@953 -- # uname 00:05:04.233 12:39:34 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.233 12:39:34 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1534941 00:05:04.233 12:39:35 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.233 12:39:35 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.233 12:39:35 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1534941' 00:05:04.233 killing process with pid 1534941 00:05:04.233 12:39:35 json_config -- common/autotest_common.sh@967 -- # kill 1534941 00:05:04.233 12:39:35 json_config -- common/autotest_common.sh@972 -- # wait 1534941 00:05:05.609 12:39:36 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.609 12:39:36 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:05.609 12:39:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.609 12:39:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.609 12:39:36 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:05.609 12:39:36 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:05.609 INFO: Success 00:05:05.609 00:05:05.609 real 0m14.541s 
00:05:05.609 user 0m15.403s 00:05:05.609 sys 0m1.699s 00:05:05.609 12:39:36 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.609 12:39:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.609 ************************************ 00:05:05.609 END TEST json_config 00:05:05.609 ************************************ 00:05:05.868 12:39:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.868 12:39:36 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.868 12:39:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.868 12:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.868 12:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.868 ************************************ 00:05:05.868 START TEST json_config_extra_key 00:05:05.868 ************************************ 00:05:05.868 12:39:36 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.868 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.868 12:39:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.868 12:39:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.868 12:39:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.868 12:39:36 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.868 12:39:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.868 12:39:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.868 12:39:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:05.868 12:39:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.868 12:39:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.869 12:39:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:05.869 12:39:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:05.869 12:39:36 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.869 12:39:36 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:05.869 INFO: launching applications... 00:05:05.869 12:39:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1536014 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.869 Waiting for target to run... 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1536014 /var/tmp/spdk_tgt.sock 00:05:05.869 12:39:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.869 12:39:36 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1536014 ']' 00:05:05.869 12:39:36 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.869 12:39:36 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.869 12:39:36 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.869 12:39:36 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.869 12:39:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.869 [2024-07-15 12:39:36.749835] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:05.869 [2024-07-15 12:39:36.749888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536014 ] 00:05:05.869 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.127 [2024-07-15 12:39:37.034317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.386 [2024-07-15 12:39:37.103218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.645 12:39:37 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.645 12:39:37 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:06.645 00:05:06.645 12:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:06.645 INFO: shutting down applications... 00:05:06.645 12:39:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1536014 ]] 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1536014 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1536014 00:05:06.645 12:39:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1536014 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.212 12:39:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.212 SPDK target shutdown done 00:05:07.212 12:39:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.212 Success 00:05:07.212 00:05:07.212 real 0m1.445s 00:05:07.212 user 0m1.195s 00:05:07.212 sys 0m0.396s 00:05:07.212 12:39:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.212 12:39:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.212 ************************************ 00:05:07.212 END TEST json_config_extra_key 00:05:07.212 ************************************ 00:05:07.212 12:39:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.212 12:39:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.212 12:39:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.212 12:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.212 12:39:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.212 ************************************ 00:05:07.212 START TEST alias_rpc 00:05:07.212 ************************************ 00:05:07.212 12:39:38 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.471 * Looking for test storage... 00:05:07.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:07.471 12:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.471 12:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1536299 00:05:07.471 12:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1536299 00:05:07.471 12:39:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.471 12:39:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1536299 ']' 00:05:07.471 12:39:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.471 12:39:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.471 12:39:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.471 12:39:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.471 12:39:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.471 [2024-07-15 12:39:38.263729] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:07.471 [2024-07-15 12:39:38.263781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536299 ] 00:05:07.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.471 [2024-07-15 12:39:38.329173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.471 [2024-07-15 12:39:38.408380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:08.407 12:39:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:08.407 12:39:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1536299 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1536299 ']' 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1536299 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1536299 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.407 12:39:39 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.408 12:39:39 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1536299' 00:05:08.408 killing process with pid 1536299 00:05:08.408 12:39:39 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1536299 00:05:08.408 12:39:39 alias_rpc -- common/autotest_common.sh@972 -- # wait 1536299 00:05:08.666 00:05:08.666 real 0m1.486s 00:05:08.666 user 0m1.614s 00:05:08.666 sys 0m0.409s 00:05:08.666 12:39:39 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.666 12:39:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.666 ************************************ 00:05:08.666 END TEST alias_rpc 00:05:08.666 ************************************ 00:05:08.928 12:39:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.928 12:39:39 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:08.928 12:39:39 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.928 12:39:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.928 12:39:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.928 12:39:39 -- common/autotest_common.sh@10 -- # set +x 00:05:08.928 ************************************ 00:05:08.928 START TEST spdkcli_tcp 00:05:08.928 ************************************ 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.928 * Looking for test storage... 00:05:08.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1536589 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1536589 00:05:08.928 12:39:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1536589 ']' 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.928 12:39:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.928 [2024-07-15 12:39:39.822780] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
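The alias_rpc teardown just above goes through the common killprocess helper: confirm the pid is alive with kill -0, read its process name, signal it, then reap it. A hedged sketch of that shutdown pattern, modeled on the kill/ps/wait calls in the log (the pid is the one from this run; substitute your own):

  # Graceful target shutdown (sketch of the killprocess sequence above).
  pid=1536299
  if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [ "$name" != sudo ] && kill "$pid"        # default SIGTERM unless it is a sudo wrapper
    wait "$pid" 2>/dev/null || true           # works because the target was started by this shell
  fi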
00:05:08.928 [2024-07-15 12:39:39.822829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536589 ] 00:05:08.928 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.186 [2024-07-15 12:39:39.891902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.186 [2024-07-15 12:39:39.971185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.186 [2024-07-15 12:39:39.971187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.752 12:39:40 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.753 12:39:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:09.753 12:39:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1536817 00:05:09.753 12:39:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.753 12:39:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.011 [ 00:05:10.011 "bdev_malloc_delete", 00:05:10.011 "bdev_malloc_create", 00:05:10.011 "bdev_null_resize", 00:05:10.011 "bdev_null_delete", 00:05:10.011 "bdev_null_create", 00:05:10.011 "bdev_nvme_cuse_unregister", 00:05:10.011 "bdev_nvme_cuse_register", 00:05:10.011 "bdev_opal_new_user", 00:05:10.012 "bdev_opal_set_lock_state", 00:05:10.012 "bdev_opal_delete", 00:05:10.012 "bdev_opal_get_info", 00:05:10.012 "bdev_opal_create", 00:05:10.012 "bdev_nvme_opal_revert", 00:05:10.012 "bdev_nvme_opal_init", 00:05:10.012 "bdev_nvme_send_cmd", 00:05:10.012 "bdev_nvme_get_path_iostat", 00:05:10.012 "bdev_nvme_get_mdns_discovery_info", 00:05:10.012 "bdev_nvme_stop_mdns_discovery", 00:05:10.012 "bdev_nvme_start_mdns_discovery", 00:05:10.012 "bdev_nvme_set_multipath_policy", 00:05:10.012 "bdev_nvme_set_preferred_path", 00:05:10.012 "bdev_nvme_get_io_paths", 00:05:10.012 "bdev_nvme_remove_error_injection", 00:05:10.012 "bdev_nvme_add_error_injection", 00:05:10.012 "bdev_nvme_get_discovery_info", 00:05:10.012 "bdev_nvme_stop_discovery", 00:05:10.012 "bdev_nvme_start_discovery", 00:05:10.012 "bdev_nvme_get_controller_health_info", 00:05:10.012 "bdev_nvme_disable_controller", 00:05:10.012 "bdev_nvme_enable_controller", 00:05:10.012 "bdev_nvme_reset_controller", 00:05:10.012 "bdev_nvme_get_transport_statistics", 00:05:10.012 "bdev_nvme_apply_firmware", 00:05:10.012 "bdev_nvme_detach_controller", 00:05:10.012 "bdev_nvme_get_controllers", 00:05:10.012 "bdev_nvme_attach_controller", 00:05:10.012 "bdev_nvme_set_hotplug", 00:05:10.012 "bdev_nvme_set_options", 00:05:10.012 "bdev_passthru_delete", 00:05:10.012 "bdev_passthru_create", 00:05:10.012 "bdev_lvol_set_parent_bdev", 00:05:10.012 "bdev_lvol_set_parent", 00:05:10.012 "bdev_lvol_check_shallow_copy", 00:05:10.012 "bdev_lvol_start_shallow_copy", 00:05:10.012 "bdev_lvol_grow_lvstore", 00:05:10.012 "bdev_lvol_get_lvols", 00:05:10.012 "bdev_lvol_get_lvstores", 00:05:10.012 "bdev_lvol_delete", 00:05:10.012 "bdev_lvol_set_read_only", 00:05:10.012 "bdev_lvol_resize", 00:05:10.012 "bdev_lvol_decouple_parent", 00:05:10.012 "bdev_lvol_inflate", 00:05:10.012 "bdev_lvol_rename", 00:05:10.012 "bdev_lvol_clone_bdev", 00:05:10.012 "bdev_lvol_clone", 00:05:10.012 "bdev_lvol_snapshot", 00:05:10.012 "bdev_lvol_create", 00:05:10.012 "bdev_lvol_delete_lvstore", 00:05:10.012 
"bdev_lvol_rename_lvstore", 00:05:10.012 "bdev_lvol_create_lvstore", 00:05:10.012 "bdev_raid_set_options", 00:05:10.012 "bdev_raid_remove_base_bdev", 00:05:10.012 "bdev_raid_add_base_bdev", 00:05:10.012 "bdev_raid_delete", 00:05:10.012 "bdev_raid_create", 00:05:10.012 "bdev_raid_get_bdevs", 00:05:10.012 "bdev_error_inject_error", 00:05:10.012 "bdev_error_delete", 00:05:10.012 "bdev_error_create", 00:05:10.012 "bdev_split_delete", 00:05:10.012 "bdev_split_create", 00:05:10.012 "bdev_delay_delete", 00:05:10.012 "bdev_delay_create", 00:05:10.012 "bdev_delay_update_latency", 00:05:10.012 "bdev_zone_block_delete", 00:05:10.012 "bdev_zone_block_create", 00:05:10.012 "blobfs_create", 00:05:10.012 "blobfs_detect", 00:05:10.012 "blobfs_set_cache_size", 00:05:10.012 "bdev_aio_delete", 00:05:10.012 "bdev_aio_rescan", 00:05:10.012 "bdev_aio_create", 00:05:10.012 "bdev_ftl_set_property", 00:05:10.012 "bdev_ftl_get_properties", 00:05:10.012 "bdev_ftl_get_stats", 00:05:10.012 "bdev_ftl_unmap", 00:05:10.012 "bdev_ftl_unload", 00:05:10.012 "bdev_ftl_delete", 00:05:10.012 "bdev_ftl_load", 00:05:10.012 "bdev_ftl_create", 00:05:10.012 "bdev_virtio_attach_controller", 00:05:10.012 "bdev_virtio_scsi_get_devices", 00:05:10.012 "bdev_virtio_detach_controller", 00:05:10.012 "bdev_virtio_blk_set_hotplug", 00:05:10.012 "bdev_iscsi_delete", 00:05:10.012 "bdev_iscsi_create", 00:05:10.012 "bdev_iscsi_set_options", 00:05:10.012 "accel_error_inject_error", 00:05:10.012 "ioat_scan_accel_module", 00:05:10.012 "dsa_scan_accel_module", 00:05:10.012 "iaa_scan_accel_module", 00:05:10.012 "vfu_virtio_create_scsi_endpoint", 00:05:10.012 "vfu_virtio_scsi_remove_target", 00:05:10.012 "vfu_virtio_scsi_add_target", 00:05:10.012 "vfu_virtio_create_blk_endpoint", 00:05:10.012 "vfu_virtio_delete_endpoint", 00:05:10.012 "keyring_file_remove_key", 00:05:10.012 "keyring_file_add_key", 00:05:10.012 "keyring_linux_set_options", 00:05:10.012 "iscsi_get_histogram", 00:05:10.012 "iscsi_enable_histogram", 00:05:10.012 "iscsi_set_options", 00:05:10.012 "iscsi_get_auth_groups", 00:05:10.012 "iscsi_auth_group_remove_secret", 00:05:10.012 "iscsi_auth_group_add_secret", 00:05:10.012 "iscsi_delete_auth_group", 00:05:10.012 "iscsi_create_auth_group", 00:05:10.012 "iscsi_set_discovery_auth", 00:05:10.012 "iscsi_get_options", 00:05:10.012 "iscsi_target_node_request_logout", 00:05:10.012 "iscsi_target_node_set_redirect", 00:05:10.012 "iscsi_target_node_set_auth", 00:05:10.012 "iscsi_target_node_add_lun", 00:05:10.012 "iscsi_get_stats", 00:05:10.012 "iscsi_get_connections", 00:05:10.012 "iscsi_portal_group_set_auth", 00:05:10.012 "iscsi_start_portal_group", 00:05:10.012 "iscsi_delete_portal_group", 00:05:10.012 "iscsi_create_portal_group", 00:05:10.012 "iscsi_get_portal_groups", 00:05:10.012 "iscsi_delete_target_node", 00:05:10.012 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.012 "iscsi_target_node_add_pg_ig_maps", 00:05:10.012 "iscsi_create_target_node", 00:05:10.012 "iscsi_get_target_nodes", 00:05:10.012 "iscsi_delete_initiator_group", 00:05:10.012 "iscsi_initiator_group_remove_initiators", 00:05:10.012 "iscsi_initiator_group_add_initiators", 00:05:10.012 "iscsi_create_initiator_group", 00:05:10.012 "iscsi_get_initiator_groups", 00:05:10.012 "nvmf_set_crdt", 00:05:10.012 "nvmf_set_config", 00:05:10.012 "nvmf_set_max_subsystems", 00:05:10.012 "nvmf_stop_mdns_prr", 00:05:10.012 "nvmf_publish_mdns_prr", 00:05:10.012 "nvmf_subsystem_get_listeners", 00:05:10.012 "nvmf_subsystem_get_qpairs", 00:05:10.012 "nvmf_subsystem_get_controllers", 00:05:10.012 
"nvmf_get_stats", 00:05:10.012 "nvmf_get_transports", 00:05:10.012 "nvmf_create_transport", 00:05:10.012 "nvmf_get_targets", 00:05:10.012 "nvmf_delete_target", 00:05:10.012 "nvmf_create_target", 00:05:10.012 "nvmf_subsystem_allow_any_host", 00:05:10.012 "nvmf_subsystem_remove_host", 00:05:10.012 "nvmf_subsystem_add_host", 00:05:10.012 "nvmf_ns_remove_host", 00:05:10.012 "nvmf_ns_add_host", 00:05:10.012 "nvmf_subsystem_remove_ns", 00:05:10.012 "nvmf_subsystem_add_ns", 00:05:10.012 "nvmf_subsystem_listener_set_ana_state", 00:05:10.012 "nvmf_discovery_get_referrals", 00:05:10.012 "nvmf_discovery_remove_referral", 00:05:10.012 "nvmf_discovery_add_referral", 00:05:10.012 "nvmf_subsystem_remove_listener", 00:05:10.012 "nvmf_subsystem_add_listener", 00:05:10.012 "nvmf_delete_subsystem", 00:05:10.012 "nvmf_create_subsystem", 00:05:10.012 "nvmf_get_subsystems", 00:05:10.012 "env_dpdk_get_mem_stats", 00:05:10.012 "nbd_get_disks", 00:05:10.012 "nbd_stop_disk", 00:05:10.012 "nbd_start_disk", 00:05:10.012 "ublk_recover_disk", 00:05:10.012 "ublk_get_disks", 00:05:10.012 "ublk_stop_disk", 00:05:10.012 "ublk_start_disk", 00:05:10.012 "ublk_destroy_target", 00:05:10.012 "ublk_create_target", 00:05:10.012 "virtio_blk_create_transport", 00:05:10.012 "virtio_blk_get_transports", 00:05:10.012 "vhost_controller_set_coalescing", 00:05:10.012 "vhost_get_controllers", 00:05:10.012 "vhost_delete_controller", 00:05:10.012 "vhost_create_blk_controller", 00:05:10.012 "vhost_scsi_controller_remove_target", 00:05:10.012 "vhost_scsi_controller_add_target", 00:05:10.012 "vhost_start_scsi_controller", 00:05:10.012 "vhost_create_scsi_controller", 00:05:10.012 "thread_set_cpumask", 00:05:10.012 "framework_get_governor", 00:05:10.012 "framework_get_scheduler", 00:05:10.012 "framework_set_scheduler", 00:05:10.012 "framework_get_reactors", 00:05:10.012 "thread_get_io_channels", 00:05:10.012 "thread_get_pollers", 00:05:10.012 "thread_get_stats", 00:05:10.012 "framework_monitor_context_switch", 00:05:10.012 "spdk_kill_instance", 00:05:10.012 "log_enable_timestamps", 00:05:10.012 "log_get_flags", 00:05:10.012 "log_clear_flag", 00:05:10.012 "log_set_flag", 00:05:10.012 "log_get_level", 00:05:10.012 "log_set_level", 00:05:10.012 "log_get_print_level", 00:05:10.012 "log_set_print_level", 00:05:10.012 "framework_enable_cpumask_locks", 00:05:10.012 "framework_disable_cpumask_locks", 00:05:10.012 "framework_wait_init", 00:05:10.012 "framework_start_init", 00:05:10.012 "scsi_get_devices", 00:05:10.012 "bdev_get_histogram", 00:05:10.012 "bdev_enable_histogram", 00:05:10.012 "bdev_set_qos_limit", 00:05:10.012 "bdev_set_qd_sampling_period", 00:05:10.012 "bdev_get_bdevs", 00:05:10.012 "bdev_reset_iostat", 00:05:10.012 "bdev_get_iostat", 00:05:10.012 "bdev_examine", 00:05:10.012 "bdev_wait_for_examine", 00:05:10.012 "bdev_set_options", 00:05:10.012 "notify_get_notifications", 00:05:10.012 "notify_get_types", 00:05:10.012 "accel_get_stats", 00:05:10.012 "accel_set_options", 00:05:10.012 "accel_set_driver", 00:05:10.012 "accel_crypto_key_destroy", 00:05:10.012 "accel_crypto_keys_get", 00:05:10.012 "accel_crypto_key_create", 00:05:10.012 "accel_assign_opc", 00:05:10.012 "accel_get_module_info", 00:05:10.012 "accel_get_opc_assignments", 00:05:10.012 "vmd_rescan", 00:05:10.012 "vmd_remove_device", 00:05:10.012 "vmd_enable", 00:05:10.012 "sock_get_default_impl", 00:05:10.012 "sock_set_default_impl", 00:05:10.012 "sock_impl_set_options", 00:05:10.012 "sock_impl_get_options", 00:05:10.012 "iobuf_get_stats", 00:05:10.012 "iobuf_set_options", 
00:05:10.012 "keyring_get_keys", 00:05:10.012 "framework_get_pci_devices", 00:05:10.012 "framework_get_config", 00:05:10.012 "framework_get_subsystems", 00:05:10.012 "vfu_tgt_set_base_path", 00:05:10.012 "trace_get_info", 00:05:10.012 "trace_get_tpoint_group_mask", 00:05:10.012 "trace_disable_tpoint_group", 00:05:10.012 "trace_enable_tpoint_group", 00:05:10.012 "trace_clear_tpoint_mask", 00:05:10.012 "trace_set_tpoint_mask", 00:05:10.012 "spdk_get_version", 00:05:10.012 "rpc_get_methods" 00:05:10.012 ] 00:05:10.012 12:39:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.012 12:39:40 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.013 12:39:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.013 12:39:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1536589 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1536589 ']' 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1536589 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1536589 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1536589' 00:05:10.013 killing process with pid 1536589 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1536589 00:05:10.013 12:39:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1536589 00:05:10.270 00:05:10.270 real 0m1.523s 00:05:10.270 user 0m2.827s 00:05:10.270 sys 0m0.435s 00:05:10.270 12:39:41 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.270 12:39:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.270 ************************************ 00:05:10.270 END TEST spdkcli_tcp 00:05:10.270 ************************************ 00:05:10.530 12:39:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.530 12:39:41 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.530 12:39:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.530 12:39:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.530 12:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:10.530 ************************************ 00:05:10.530 START TEST dpdk_mem_utility 00:05:10.530 ************************************ 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.530 * Looking for test storage... 
00:05:10.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:10.530 12:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:10.530 12:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1537004 00:05:10.530 12:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.530 12:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1537004 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1537004 ']' 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.530 12:39:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.530 [2024-07-15 12:39:41.400528] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:10.530 [2024-07-15 12:39:41.400589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537004 ] 00:05:10.530 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.530 [2024-07-15 12:39:41.468064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.788 [2024-07-15 12:39:41.543770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.355 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.355 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:11.355 12:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.355 12:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.355 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.355 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.355 { 00:05:11.355 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.355 } 00:05:11.355 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.355 12:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.355 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:11.355 1 heaps totaling size 814.000000 MiB 00:05:11.355 size: 814.000000 MiB heap id: 0 00:05:11.355 end heaps---------- 00:05:11.355 8 mempools totaling size 598.116089 MiB 00:05:11.355 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.355 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.355 size: 84.521057 MiB name: bdev_io_1537004 00:05:11.355 size: 51.011292 MiB name: evtpool_1537004 00:05:11.355 
size: 50.003479 MiB name: msgpool_1537004 00:05:11.355 size: 21.763794 MiB name: PDU_Pool 00:05:11.355 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.355 size: 0.026123 MiB name: Session_Pool 00:05:11.355 end mempools------- 00:05:11.355 6 memzones totaling size 4.142822 MiB 00:05:11.355 size: 1.000366 MiB name: RG_ring_0_1537004 00:05:11.355 size: 1.000366 MiB name: RG_ring_1_1537004 00:05:11.355 size: 1.000366 MiB name: RG_ring_4_1537004 00:05:11.355 size: 1.000366 MiB name: RG_ring_5_1537004 00:05:11.355 size: 0.125366 MiB name: RG_ring_2_1537004 00:05:11.355 size: 0.015991 MiB name: RG_ring_3_1537004 00:05:11.355 end memzones------- 00:05:11.355 12:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.614 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:11.614 list of free elements. size: 12.519348 MiB 00:05:11.614 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:11.614 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:11.614 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:11.614 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:11.614 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:11.614 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:11.614 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:11.614 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:11.614 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:11.614 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:11.614 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:11.614 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:11.614 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:11.614 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:11.614 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:11.614 list of standard malloc elements. 
size: 199.218079 MiB 00:05:11.614 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:11.614 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:11.614 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.614 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:11.614 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:11.614 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.614 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:11.614 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.614 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:11.614 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:11.614 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:11.614 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:11.614 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:11.614 list of memzone associated elements. 
size: 602.262573 MiB 00:05:11.614 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:11.614 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.614 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:11.614 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.614 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:11.614 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1537004_0 00:05:11.614 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:11.614 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1537004_0 00:05:11.614 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:11.614 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1537004_0 00:05:11.614 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:11.614 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.614 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:11.614 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.614 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:11.614 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1537004 00:05:11.614 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:11.614 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1537004 00:05:11.614 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.614 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1537004 00:05:11.614 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:11.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.614 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:11.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.614 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:11.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.614 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:11.614 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.614 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:11.614 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1537004 00:05:11.614 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:11.614 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1537004 00:05:11.614 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:11.614 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1537004 00:05:11.614 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:11.614 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1537004 00:05:11.614 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:11.614 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1537004 00:05:11.614 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:11.614 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.614 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:11.614 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.614 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:11.614 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.614 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:11.614 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1537004 00:05:11.614 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:11.614 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.614 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:11.614 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.615 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:11.615 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1537004 00:05:11.615 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:11.615 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.615 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:11.615 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1537004 00:05:11.615 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:11.615 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1537004 00:05:11.615 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:11.615 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.615 12:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.615 12:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1537004 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1537004 ']' 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1537004 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1537004 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1537004' 00:05:11.615 killing process with pid 1537004 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1537004 00:05:11.615 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1537004 00:05:11.873 00:05:11.873 real 0m1.412s 00:05:11.873 user 0m1.478s 00:05:11.873 sys 0m0.417s 00:05:11.873 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.873 12:39:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.873 ************************************ 00:05:11.873 END TEST dpdk_mem_utility 00:05:11.873 ************************************ 00:05:11.873 12:39:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.873 12:39:42 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.873 12:39:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.873 12:39:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.873 12:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.873 ************************************ 00:05:11.873 START TEST event 00:05:11.873 ************************************ 00:05:11.873 12:39:42 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.873 * Looking for test storage... 
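The dpdk_mem_utility pass that wraps up above drives two tools: the env_dpdk_get_mem_stats RPC, whose reply names /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which renders that dump first as the heap/mempool/memzone summary and then as the per-element listing. A sketch of the same round trip against a running target on the default socket; the meaning of -m 0 is inferred from the heap-id-0 output above:

  # Dump and inspect DPDK memory state (sketch of test_dpdk_mem_info.sh above).
  scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py        # summary view, as printed above
  scripts/dpdk_mem_info.py -m 0   # detailed element list; 0 assumed to select heap id 0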
00:05:12.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:12.131 12:39:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:12.131 12:39:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.131 12:39:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.132 12:39:42 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:12.132 12:39:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.132 12:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.132 ************************************ 00:05:12.132 START TEST event_perf 00:05:12.132 ************************************ 00:05:12.132 12:39:42 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.132 Running I/O for 1 seconds...[2024-07-15 12:39:42.888018] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:12.132 [2024-07-15 12:39:42.888085] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537382 ] 00:05:12.132 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.132 [2024-07-15 12:39:42.960035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.132 [2024-07-15 12:39:43.034493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.132 [2024-07-15 12:39:43.034598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.132 [2024-07-15 12:39:43.034695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.132 [2024-07-15 12:39:43.034696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.505 Running I/O for 1 seconds... 00:05:13.505 lcore 0: 208261 00:05:13.505 lcore 1: 208262 00:05:13.505 lcore 2: 208261 00:05:13.505 lcore 3: 208261 00:05:13.505 done. 00:05:13.505 00:05:13.505 real 0m1.238s 00:05:13.505 user 0m4.143s 00:05:13.505 sys 0m0.093s 00:05:13.505 12:39:44 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.505 12:39:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.505 ************************************ 00:05:13.505 END TEST event_perf 00:05:13.505 ************************************ 00:05:13.505 12:39:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:13.505 12:39:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.505 12:39:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:13.505 12:39:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.505 12:39:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.505 ************************************ 00:05:13.505 START TEST event_reactor 00:05:13.505 ************************************ 00:05:13.505 12:39:44 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.505 [2024-07-15 12:39:44.193099] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
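event_perf above prints one 'lcore N: count' line per reactor for its one-second run. If the per-core counters are wanted as a single throughput figure, something like the following would total them from a captured log (illustrative post-processing, not part of the harness; event_perf.log is a hypothetical capture file):

  # Sum per-lcore event counts from saved event_perf output (illustrative).
  grep -Eo 'lcore [0-9]+: [0-9]+' event_perf.log |
    awk '{total += $3} END {printf "%d events/sec across %d cores\n", total, NR}'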
00:05:13.505 [2024-07-15 12:39:44.193168] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537591 ] 00:05:13.505 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.505 [2024-07-15 12:39:44.264529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.505 [2024-07-15 12:39:44.337655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.912 test_start 00:05:14.912 oneshot 00:05:14.912 tick 100 00:05:14.912 tick 100 00:05:14.912 tick 250 00:05:14.912 tick 100 00:05:14.912 tick 100 00:05:14.912 tick 250 00:05:14.912 tick 100 00:05:14.912 tick 500 00:05:14.912 tick 100 00:05:14.912 tick 100 00:05:14.912 tick 250 00:05:14.912 tick 100 00:05:14.912 tick 100 00:05:14.912 test_end 00:05:14.912 00:05:14.912 real 0m1.234s 00:05:14.912 user 0m1.153s 00:05:14.912 sys 0m0.077s 00:05:14.912 12:39:45 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.912 12:39:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:14.912 ************************************ 00:05:14.912 END TEST event_reactor 00:05:14.912 ************************************ 00:05:14.912 12:39:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:14.912 12:39:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.912 12:39:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:14.912 12:39:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.912 12:39:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.912 ************************************ 00:05:14.912 START TEST event_reactor_perf 00:05:14.912 ************************************ 00:05:14.912 12:39:45 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.912 [2024-07-15 12:39:45.493629] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
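The event_reactor output above interleaves one oneshot line with repeating tick lines; the 100/250/500 values appear to identify the timer that fired (matching the registered poller periods), so a quick tally per timer from a captured run could look like this (illustrative only; reactor.log is a hypothetical capture file):

  # Count tick lines per timer from saved reactor output (illustrative).
  grep -Eo 'tick [0-9]+' reactor.log | sort | uniq -c
  # the run above would yield: 9 'tick 100', 3 'tick 250', 1 'tick 500'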
00:05:14.912 [2024-07-15 12:39:45.493697] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537800 ] 00:05:14.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.912 [2024-07-15 12:39:45.566252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.912 [2024-07-15 12:39:45.638596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.880 test_start 00:05:15.880 test_end 00:05:15.880 Performance: 507205 events per second 00:05:15.880 00:05:15.880 real 0m1.234s 00:05:15.880 user 0m1.140s 00:05:15.880 sys 0m0.090s 00:05:15.880 12:39:46 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.880 12:39:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.880 ************************************ 00:05:15.880 END TEST event_reactor_perf 00:05:15.880 ************************************ 00:05:15.880 12:39:46 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.880 12:39:46 event -- event/event.sh@49 -- # uname -s 00:05:15.880 12:39:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:15.880 12:39:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.880 12:39:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.880 12:39:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.880 12:39:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.880 ************************************ 00:05:15.880 START TEST event_scheduler 00:05:15.880 ************************************ 00:05:15.880 12:39:46 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.139 * Looking for test storage... 00:05:16.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:16.139 12:39:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.139 12:39:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1538090 00:05:16.139 12:39:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.139 12:39:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.139 12:39:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1538090 00:05:16.139 12:39:46 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1538090 ']' 00:05:16.139 12:39:46 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.139 12:39:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.139 12:39:46 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
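event_reactor_perf above reduces to one figure, 'Performance: 507205 events per second'. A wrapper that tracks that number across runs could pull it out of a saved log like so (illustrative; the harness does no such post-processing, and reactor_perf.log is a hypothetical capture file):

  # Extract the events-per-second figure from saved reactor_perf output (illustrative).
  grep -Eo 'Performance: [0-9]+ events per second' reactor_perf.log | awk '{print $2}'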
00:05:16.139 12:39:46 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.139 12:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.139 [2024-07-15 12:39:46.914693] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:16.139 [2024-07-15 12:39:46.914746] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538090 ] 00:05:16.139 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.139 [2024-07-15 12:39:46.969254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.139 [2024-07-15 12:39:47.052498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.139 [2024-07-15 12:39:47.052606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.139 [2024-07-15 12:39:47.052734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.139 [2024-07-15 12:39:47.052734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:17.076 12:39:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 [2024-07-15 12:39:47.739068] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:17.076 [2024-07-15 12:39:47.739084] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:17.076 [2024-07-15 12:39:47.739093] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:17.076 [2024-07-15 12:39:47.739098] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:17.076 [2024-07-15 12:39:47.739103] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 [2024-07-15 12:39:47.810715] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 ************************************ 00:05:17.076 START TEST scheduler_create_thread 00:05:17.076 ************************************ 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 2 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 3 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 4 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 5 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.076 6 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.076 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.077 7 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.077 8 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.077 9 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.077 10 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.077 12:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.453 12:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.453 12:39:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.453 12:39:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.453 12:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.453 12:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.826 12:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.826 00:05:19.826 real 0m2.620s 00:05:19.826 user 0m0.021s 00:05:19.826 sys 0m0.007s 00:05:19.826 12:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.826 12:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.826 ************************************ 00:05:19.826 END TEST scheduler_create_thread 00:05:19.826 ************************************ 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:19.826 12:39:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:19.826 12:39:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1538090 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1538090 ']' 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1538090 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538090 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538090' 00:05:19.826 killing process with pid 1538090 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1538090 00:05:19.826 12:39:50 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1538090 00:05:20.084 [2024-07-15 12:39:50.945052] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
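The scheduler run that just stopped above exercises SPDK's deferred-init flow: the app starts with --wait-for-rpc, the dynamic scheduler is selected over RPC (the dpdk_governor SMT-sibling error is tolerated, and the dynamic scheduler still comes up with load limit 20 / core limit 80 / core busy 95), and only then does framework_start_init bring everything up. A condensed sketch of that sequence using a generic target app and the default socket; the paths and the sleep are illustrative, and the harness uses waitforlisten instead of sleeping:

  # Deferred-init startup, as exercised by scheduler.sh above (sketch).
  build/bin/spdk_tgt -m 0xF --wait-for-rpc &   # subsystems stay uninitialized
  sleep 1                                      # crude stand-in for waitforlisten
  scripts/rpc.py framework_set_scheduler dynamic   # must precede init
  scripts/rpc.py framework_start_init              # reactors and subsystems start now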
00:05:20.343 00:05:20.343 real 0m4.361s 00:05:20.343 user 0m8.271s 00:05:20.343 sys 0m0.373s 00:05:20.343 12:39:51 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.343 12:39:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.343 ************************************ 00:05:20.343 END TEST event_scheduler 00:05:20.343 ************************************ 00:05:20.343 12:39:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:20.343 12:39:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:20.343 12:39:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:20.343 12:39:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.343 12:39:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.343 12:39:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.343 ************************************ 00:05:20.343 START TEST app_repeat 00:05:20.343 ************************************ 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1538913 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1538913' 00:05:20.343 Process app_repeat pid: 1538913 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:20.343 spdk_app_start Round 0 00:05:20.343 12:39:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1538913 /var/tmp/spdk-nbd.sock 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1538913 ']' 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.343 12:39:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.343 [2024-07-15 12:39:51.252818] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
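Before the rounds start, app_repeat is launched against a private RPC socket and a trap is armed so the binary is reaped on any exit path. A sketch of that setup, assuming $SPDK_ROOT points at the checkout (the log uses the full Jenkins workspace path) and the killprocess helper from above:

    rpc_server=/var/tmp/spdk-nbd.sock
    nbd_list=(/dev/nbd0 /dev/nbd1)
    bdev_list=(Malloc0 Malloc1)
    repeat_times=4

    modprobe nbd   # the nbd devices below need the kernel module loaded
    "$SPDK_ROOT/test/event/app_repeat/app_repeat" \
        -r "$rpc_server" -m 0x3 -t "$repeat_times" &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"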
00:05:20.343 [2024-07-15 12:39:51.252871] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538913 ] 00:05:20.343 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.600 [2024-07-15 12:39:51.322160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.600 [2024-07-15 12:39:51.396678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.600 [2024-07-15 12:39:51.396680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.165 12:39:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.165 12:39:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:21.165 12:39:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.422 Malloc0 00:05:21.422 12:39:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.681 Malloc1 00:05:21.681 12:39:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.681 12:39:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.941 /dev/nbd0 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.941 12:39:52 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.941 1+0 records in 00:05:21.941 1+0 records out 00:05:21.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230223 s, 17.8 MB/s 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.941 /dev/nbd1 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.941 1+0 records in 00:05:21.941 1+0 records out 00:05:21.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203786 s, 20.1 MB/s 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.941 12:39:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.941 12:39:52 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.941 12:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.200 { 00:05:22.200 "nbd_device": "/dev/nbd0", 00:05:22.200 "bdev_name": "Malloc0" 00:05:22.200 }, 00:05:22.200 { 00:05:22.200 "nbd_device": "/dev/nbd1", 00:05:22.200 "bdev_name": "Malloc1" 00:05:22.200 } 00:05:22.200 ]' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.200 { 00:05:22.200 "nbd_device": "/dev/nbd0", 00:05:22.200 "bdev_name": "Malloc0" 00:05:22.200 }, 00:05:22.200 { 00:05:22.200 "nbd_device": "/dev/nbd1", 00:05:22.200 "bdev_name": "Malloc1" 00:05:22.200 } 00:05:22.200 ]' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.200 /dev/nbd1' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.200 /dev/nbd1' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.200 256+0 records in 00:05:22.200 256+0 records out 00:05:22.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974233 s, 108 MB/s 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.200 256+0 records in 00:05:22.200 256+0 records out 00:05:22.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144053 s, 72.8 MB/s 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.200 12:39:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.458 256+0 records in 00:05:22.458 256+0 records out 00:05:22.458 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0148347 s, 70.7 MB/s 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.459 12:39:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.717 12:39:53 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.717 12:39:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.976 12:39:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.976 12:39:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.235 12:39:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.494 [2024-07-15 12:39:54.195222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.494 [2024-07-15 12:39:54.262207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.494 [2024-07-15 12:39:54.262208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.494 [2024-07-15 12:39:54.302915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.494 [2024-07-15 12:39:54.302956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.782 12:39:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.782 12:39:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:26.782 spdk_app_start Round 1 00:05:26.782 12:39:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1538913 /var/tmp/spdk-nbd.sock 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1538913 ']' 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
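Each round above ends the same way: the harness asks the app to kill itself over RPC, sleeps while it reinitializes, then waits for the socket to answer again. The loop implied by the trace, sketched with rpc.py from the SPDK scripts/ directory and the waitforlisten helper assumed in scope:

    rpc_py="$SPDK_ROOT/scripts/rpc.py"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # recreate the bdevs in the freshly restarted app and verify them
        "$rpc_py" -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc0
        "$rpc_py" -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc1
        # ... nbd attach, dd write, cmp verify (see the sketch further down) ...
        "$rpc_py" -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done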
00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.782 12:39:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:26.782 12:39:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.782 Malloc0 00:05:26.782 12:39:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.782 Malloc1 00:05:26.782 12:39:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.782 12:39:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.041 /dev/nbd0 00:05:27.041 12:39:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.041 12:39:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.041 12:39:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:27.041 12:39:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:27.041 12:39:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.041 12:39:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.041 12:39:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:27.041 12:39:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:27.042 1+0 records in 00:05:27.042 1+0 records out 00:05:27.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182039 s, 22.5 MB/s 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.042 /dev/nbd1 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.042 1+0 records in 00:05:27.042 1+0 records out 00:05:27.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000118683 s, 34.5 MB/s 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.042 12:39:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.042 12:39:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:27.301 { 00:05:27.301 "nbd_device": "/dev/nbd0", 00:05:27.301 "bdev_name": "Malloc0" 00:05:27.301 }, 00:05:27.301 { 00:05:27.301 "nbd_device": "/dev/nbd1", 00:05:27.301 "bdev_name": "Malloc1" 00:05:27.301 } 00:05:27.301 ]' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.301 { 00:05:27.301 "nbd_device": "/dev/nbd0", 00:05:27.301 "bdev_name": "Malloc0" 00:05:27.301 }, 00:05:27.301 { 00:05:27.301 "nbd_device": "/dev/nbd1", 00:05:27.301 "bdev_name": "Malloc1" 00:05:27.301 } 00:05:27.301 ]' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.301 /dev/nbd1' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.301 /dev/nbd1' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.301 256+0 records in 00:05:27.301 256+0 records out 00:05:27.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103238 s, 102 MB/s 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.301 256+0 records in 00:05:27.301 256+0 records out 00:05:27.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140083 s, 74.9 MB/s 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.301 12:39:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.560 256+0 records in 00:05:27.560 256+0 records out 00:05:27.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014704 s, 71.3 MB/s 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.560 12:39:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.819 12:39:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.079 12:39:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.079 12:39:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.338 12:39:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.338 [2024-07-15 12:39:59.271098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.597 [2024-07-15 12:39:59.345925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.597 [2024-07-15 12:39:59.345926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.597 [2024-07-15 12:39:59.387595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.597 [2024-07-15 12:39:59.387636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.886 12:40:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.886 12:40:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:31.886 spdk_app_start Round 2 00:05:31.886 12:40:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1538913 /var/tmp/spdk-nbd.sock 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1538913 ']' 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
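The write/verify pass repeated in every round is compact enough to show whole: one 1 MiB random file is pushed through each nbd device with O_DIRECT and compared back byte-for-byte. A sketch with the workspace paths shortened:

    tmp_file=$SPDK_ROOT/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        # oflag=direct bypasses the page cache so the data really hits the bdev
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        # cmp -b prints any differing bytes; -n 1M bounds the comparison
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

Because the two nbd devices are backed by distinct malloc bdevs, a stale or crossed mapping would surface immediately as a cmp mismatch rather than a silent pass.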
00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.886 12:40:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:31.886 12:40:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.886 Malloc0 00:05:31.886 12:40:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.886 Malloc1 00:05:31.886 12:40:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.886 12:40:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.886 /dev/nbd0 00:05:32.145 12:40:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.145 12:40:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.145 1+0 records in 00:05:32.145 1+0 records out 00:05:32.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184927 s, 22.1 MB/s 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.145 12:40:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.145 12:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.145 12:40:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.145 12:40:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.145 /dev/nbd1 00:05:32.145 12:40:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.145 12:40:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.145 12:40:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.146 12:40:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.146 1+0 records in 00:05:32.146 1+0 records out 00:05:32.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181967 s, 22.5 MB/s 00:05:32.146 12:40:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.146 12:40:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.146 12:40:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.146 12:40:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.146 12:40:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.146 12:40:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.146 12:40:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.146 12:40:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.146 12:40:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.146 12:40:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.404 12:40:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:32.405 { 00:05:32.405 "nbd_device": "/dev/nbd0", 00:05:32.405 "bdev_name": "Malloc0" 00:05:32.405 }, 00:05:32.405 { 00:05:32.405 "nbd_device": "/dev/nbd1", 00:05:32.405 "bdev_name": "Malloc1" 00:05:32.405 } 00:05:32.405 ]' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.405 { 00:05:32.405 "nbd_device": "/dev/nbd0", 00:05:32.405 "bdev_name": "Malloc0" 00:05:32.405 }, 00:05:32.405 { 00:05:32.405 "nbd_device": "/dev/nbd1", 00:05:32.405 "bdev_name": "Malloc1" 00:05:32.405 } 00:05:32.405 ]' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.405 /dev/nbd1' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.405 /dev/nbd1' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.405 256+0 records in 00:05:32.405 256+0 records out 00:05:32.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102906 s, 102 MB/s 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.405 256+0 records in 00:05:32.405 256+0 records out 00:05:32.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140548 s, 74.6 MB/s 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.405 12:40:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.663 256+0 records in 00:05:32.663 256+0 records out 00:05:32.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147218 s, 71.2 MB/s 00:05:32.663 12:40:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.663 12:40:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.663 12:40:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.663 12:40:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.663 12:40:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.664 12:40:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.922 12:40:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.181 12:40:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.181 12:40:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.442 12:40:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.442 [2024-07-15 12:40:04.369432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.701 [2024-07-15 12:40:04.438428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.701 [2024-07-15 12:40:04.438428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.701 [2024-07-15 12:40:04.479113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.701 [2024-07-15 12:40:04.479155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.990 12:40:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1538913 /var/tmp/spdk-nbd.sock 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1538913 ']' 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
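The nbd_get_count check that brackets each teardown above fetches the disk list over RPC as JSON, pulls the device paths out with jq, and counts them with grep -c. A sketch of that helper; the trailing true mirrors the trace, where zero matches must not abort the run:

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name
        nbd_disks_json=$("$SPDK_ROOT/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        echo "$nbd_disks_name" | grep -c /dev/nbd || true
    }

The callers then assert the result against the expected device count (the '[' 2 -ne 2 ']' and '[' 0 -ne 0 ']' checks in the trace) before and after stopping the disks.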
00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:36.990 12:40:07 event.app_repeat -- event/event.sh@39 -- # killprocess 1538913 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1538913 ']' 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1538913 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538913 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538913' 00:05:36.990 killing process with pid 1538913 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1538913 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1538913 00:05:36.990 spdk_app_start is called in Round 0. 00:05:36.990 Shutdown signal received, stop current app iteration 00:05:36.990 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:36.990 spdk_app_start is called in Round 1. 00:05:36.990 Shutdown signal received, stop current app iteration 00:05:36.990 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:36.990 spdk_app_start is called in Round 2. 00:05:36.990 Shutdown signal received, stop current app iteration 00:05:36.990 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:05:36.990 spdk_app_start is called in Round 3. 
00:05:36.990 Shutdown signal received, stop current app iteration 00:05:36.990 12:40:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:36.990 12:40:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:36.990 00:05:36.990 real 0m16.381s 00:05:36.990 user 0m35.520s 00:05:36.990 sys 0m2.417s 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.990 12:40:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.990 ************************************ 00:05:36.990 END TEST app_repeat 00:05:36.990 ************************************ 00:05:36.990 12:40:07 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.990 12:40:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:36.990 12:40:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.990 12:40:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.990 12:40:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.990 12:40:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.990 ************************************ 00:05:36.990 START TEST cpu_locks 00:05:36.990 ************************************ 00:05:36.990 12:40:07 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.991 * Looking for test storage... 00:05:36.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:36.991 12:40:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:36.991 12:40:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:36.991 12:40:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:36.991 12:40:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:36.991 12:40:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.991 12:40:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.991 12:40:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.991 ************************************ 00:05:36.991 START TEST default_locks 00:05:36.991 ************************************ 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1541911 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1541911 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1541911 ']' 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
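The cpu_locks tests that start here all revolve around one mechanism: an SPDK app launched with a core mask (-m 0x1 above) creates and holds a file lock per claimed core, /var/tmp/spdk_cpu_lock_NNN, and a second app that wants the same core must either win that lock or opt out. The locks_exist probe used below checks the lock with lslocks; the stray "lslocks: write error" lines are, in all likelihood, just lslocks hitting a closed pipe once grep -q exits on its first match, not a test failure. A sketch of the probe under those assumptions:

# Does this pid hold a lock on any /var/tmp/spdk_cpu_lock_* file?
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # grep -q exits early; lslocks may print 'write error'
}

build/bin/spdk_tgt -m 0x1 &    # claims core 0 and locks spdk_cpu_lock_000
pid=$!
# ... wait for /var/tmp/spdk.sock to answer, as waitforlisten does above ...
locks_exist "$pid" && echo "pid $pid holds its core lock"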
00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.991 12:40:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.991 [2024-07-15 12:40:07.839707] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 12:40:07.839747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541911 ] 00:05:36.991 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.991 [2024-07-15 12:40:07.907251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.250 [2024-07-15 12:40:07.986606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1541911 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1541911 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.818 lslocks: write error 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1541911 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1541911 ']' 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1541911 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.818 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1541911 00:05:38.077 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.077 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.077 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1541911' killing process with pid 1541911 00:05:38.077 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1541911 00:05:38.077 12:40:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1541911 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1541911 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1541911 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1541911 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1541911 ']' 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1541911) - No such process 00:05:38.336 ERROR: process (pid: 1541911) is no longer running 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.336 00:05:38.336 real 0m1.331s 00:05:38.336 user 0m1.391s 00:05:38.336 sys 0m0.420s 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.336 12:40:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.336 ************************************ 00:05:38.336 END TEST default_locks 00:05:38.336 ************************************ 00:05:38.336 12:40:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.336 12:40:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.336 12:40:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.336 12:40:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.336 12:40:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.336 ************************************ 00:05:38.336 START TEST default_locks_via_rpc 00:05:38.336 ************************************ 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1542171 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1542171
00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1542171 ']' 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.336 12:40:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.336 [2024-07-15 12:40:09.241035] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 12:40:09.241075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542171 ] 00:05:38.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.596 [2024-07-15 12:40:09.306236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.596 [2024-07-15 12:40:09.385209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1542171 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1542171 00:05:39.162 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
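default_locks_via_rpc, traced above, drives the same lock files from the RPC plane instead of the command line: framework_disable_cpumask_locks releases them on the live target (no_locks then finds an empty /var/tmp/spdk_cpu_lock_* glob), and framework_enable_cpumask_locks takes them back before locks_exist re-checks. rpc_cmd in the trace is a thin wrapper over scripts/rpc.py, so the sequence is roughly:

# Toggle CPU core locks on a running target and watch the lock files follow.
build/bin/spdk_tgt -m 0x1 &                        # boots with the core 0 lock held
pid=$!
scripts/rpc.py framework_disable_cpumask_locks     # releases /var/tmp/spdk_cpu_lock_000
lslocks -p "$pid" | grep -c spdk_cpu_lock          # expect 0 matching lines
scripts/rpc.py framework_enable_cpumask_locks      # re-acquires the lock
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks back in place"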
00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1542171 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1542171 ']' 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1542171 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542171 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542171' killing process with pid 1542171 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1542171 00:05:39.730 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1542171 00:05:39.989 00:05:39.989 real 0m1.551s 00:05:39.989 user 0m1.625s 00:05:39.989 sys 0m0.512s 00:05:39.989 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.989 12:40:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.989 ************************************ 00:05:39.989 END TEST default_locks_via_rpc 00:05:39.989 ************************************ 00:05:39.989 12:40:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:39.989 12:40:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.989 12:40:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.989 12:40:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.989 12:40:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.989 ************************************ 00:05:39.989 START TEST non_locking_app_on_locked_coremask 00:05:39.989 ************************************ 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1542433 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1542433 /var/tmp/spdk.sock 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1542433 ']' 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.989 12:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.989 [2024-07-15 12:40:10.842146] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 12:40:10.842180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542433 ] 00:05:39.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.989 [2024-07-15 12:40:10.908218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.262 [2024-07-15 12:40:10.987768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1542640 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1542640 /var/tmp/spdk2.sock 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1542640 ']' 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.829 12:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.829 [2024-07-15 12:40:11.710039] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 12:40:11.710086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542640 ] 00:05:41.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.087 [2024-07-15 12:40:11.786963] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
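non_locking_app_on_locked_coremask shows the escape hatch working: pid 1542433 holds the core 0 lock, yet the second target boots on the same core because --disable-cpumask-locks (the "CPU core locks deactivated" notice above) skips the claim entirely, and -r /var/tmp/spdk2.sock keeps its RPC socket from colliding with the first instance. The two launches, reduced to the flags that matter (paths shortened from the trace):

# Instance 1: claims core 0 and its lock file, answers on /var/tmp/spdk.sock.
build/bin/spdk_tgt -m 0x1 &
# Instance 2: same core, but opts out of locking and uses its own RPC socket.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
# Only the first pid shows up in lslocks; the test then kills both in order.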
00:05:41.087 [2024-07-15 12:40:11.786990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.087 [2024-07-15 12:40:11.940553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.656 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.656 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.656 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1542433 00:05:41.656 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1542433 00:05:41.656 12:40:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.224 lslocks: write error 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1542433 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1542433 ']' 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1542433 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542433 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542433' 00:05:42.224 killing process with pid 1542433 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1542433 00:05:42.224 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1542433 00:05:43.166 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1542640 00:05:43.166 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1542640 ']' 00:05:43.166 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1542640 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542640 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542640' 00:05:43.167 
killing process with pid 1542640 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1542640 00:05:43.167 12:40:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1542640 00:05:43.167 00:05:43.167 real 0m3.302s 00:05:43.167 user 0m3.553s 00:05:43.167 sys 0m0.945s 00:05:43.167 12:40:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.167 12:40:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.167 ************************************ 00:05:43.167 END TEST non_locking_app_on_locked_coremask 00:05:43.167 ************************************ 00:05:43.425 12:40:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.425 12:40:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.425 12:40:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.425 12:40:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.425 12:40:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.425 ************************************ 00:05:43.425 START TEST locking_app_on_unlocked_coremask 00:05:43.425 ************************************ 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1542987 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1542987 /var/tmp/spdk.sock 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1542987 ']' 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.425 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.426 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.426 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.426 12:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.426 [2024-07-15 12:40:14.224289] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:43.426 [2024-07-15 12:40:14.224333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542987 ] 00:05:43.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.426 [2024-07-15 12:40:14.289187] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.426 [2024-07-15 12:40:14.289210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.426 [2024-07-15 12:40:14.368964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1543170 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1543170 /var/tmp/spdk2.sock 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1543170 ']' 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.362 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.362 [2024-07-15 12:40:15.073655] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:44.362 [2024-07-15 12:40:15.073703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543170 ] 00:05:44.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.362 [2024-07-15 12:40:15.146200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.362 [2024-07-15 12:40:15.294727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.929 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.929 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.929 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1543170 00:05:44.929 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1543170 00:05:44.929 12:40:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.497 lslocks: write error 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1542987 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1542987 ']' 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1542987 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542987 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542987' killing process with pid 1542987 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1542987 00:05:45.497 12:40:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1542987 00:05:46.065 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1543170 00:05:46.065 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1543170 ']' 00:05:46.065 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1543170 00:05:46.065 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543170
00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543170' killing process with pid 1543170 00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1543170 00:05:46.324 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1543170 00:05:46.583 00:05:46.583 real 0m3.196s 00:05:46.583 user 0m3.415s 00:05:46.583 sys 0m0.914s 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.583 ************************************ 00:05:46.583 END TEST locking_app_on_unlocked_coremask 00:05:46.583 ************************************ 00:05:46.583 12:40:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.583 12:40:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.583 12:40:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.583 12:40:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.583 12:40:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.583 ************************************ 00:05:46.583 START TEST locking_app_on_locked_coremask 00:05:46.583 ************************************ 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1543660 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1543660 /var/tmp/spdk.sock 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1543660 ']' 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.583 12:40:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.583 [2024-07-15 12:40:17.473178] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:05:46.583 [2024-07-15 12:40:17.473214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543660 ] 00:05:46.583 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.842 [2024-07-15 12:40:17.540755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.842 [2024-07-15 12:40:17.620498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1543704 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1543704 /var/tmp/spdk2.sock 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1543704 /var/tmp/spdk2.sock 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1543704 /var/tmp/spdk2.sock 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1543704 ']' 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.410 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.410 [2024-07-15 12:40:18.332034] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:47.410 [2024-07-15 12:40:18.332083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543704 ] 00:05:47.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.669 [2024-07-15 12:40:18.407222] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1543660 has claimed it. 00:05:47.669 [2024-07-15 12:40:18.407264] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1543704) - No such process 00:05:48.235 ERROR: process (pid: 1543704) is no longer running 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1543660 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1543660 00:05:48.235 12:40:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.494 lslocks: write error 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1543660 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1543660 ']' 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1543660 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543660 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543660' killing process with pid 1543660 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1543660 00:05:48.494 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1543660 00:05:49.062 00:05:49.062 real 0m2.310s 00:05:49.062 user 0m2.530s 00:05:49.062 sys 0m0.660s
00:05:49.062 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.062 12:40:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.062 ************************************ 00:05:49.062 END TEST locking_app_on_locked_coremask 00:05:49.062 ************************************ 00:05:49.062 12:40:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.062 12:40:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.062 12:40:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.062 12:40:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.062 12:40:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.062 ************************************ 00:05:49.062 START TEST locking_overlapped_coremask 00:05:49.062 ************************************ 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1544059 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1544059 /var/tmp/spdk.sock 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1544059 ']' 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.062 12:40:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.062 [2024-07-15 12:40:19.858629] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:05:49.062 [2024-07-15 12:40:19.858669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544059 ] 00:05:49.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.062 [2024-07-15 12:40:19.923211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.062 [2024-07-15 12:40:20.004713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.062 [2024-07-15 12:40:20.004741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.062 [2024-07-15 12:40:20.004742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1544166 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1544166 /var/tmp/spdk2.sock 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1544166 /var/tmp/spdk2.sock 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.035 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1544166 /var/tmp/spdk2.sock 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1544166 ']' 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.036 12:40:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.036 [2024-07-15 12:40:20.720740] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:50.036 [2024-07-15 12:40:20.720790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544166 ] 00:05:50.036 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.036 [2024-07-15 12:40:20.796354] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1544059 has claimed it. 00:05:50.036 [2024-07-15 12:40:20.796389] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1544166) - No such process 00:05:50.602 ERROR: process (pid: 1544166) is no longer running 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1544059 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1544059 ']' 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1544059 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544059 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.602 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544059' killing process with pid 1544059
12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1544059 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1544059 00:05:50.861 00:05:50.861 real 0m1.896s 00:05:50.861 user 0m5.323s 00:05:50.861 sys 0m0.417s 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.861 ************************************ 00:05:50.861 END TEST locking_overlapped_coremask 00:05:50.861 ************************************ 00:05:50.861 12:40:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.861 12:40:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:50.861 12:40:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.861 12:40:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.861 12:40:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.861 ************************************ 00:05:50.861 START TEST locking_overlapped_coremask_via_rpc 00:05:50.861 ************************************ 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1544424 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1544424 /var/tmp/spdk.sock 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1544424 ']' 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.861 12:40:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.120 [2024-07-15 12:40:21.818949] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 12:40:21.818988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544424 ] 00:05:51.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.120 [2024-07-15 12:40:21.883169] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
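The claim_cpu_cores failure above is the point of locking_overlapped_coremask: masks 0x7 (cores 0-2) and 0x1c (cores 2-4) intersect on core 2, so the second spdk_tgt dies before any of its reactors start, and check_remaining_locks then confirms that exactly /var/tmp/spdk_cpu_lock_000 through _002, the first instance's cores, are left behind. The collision, spelled out as a sketch:

# 0x7  = 0b00111 -> cores 0,1,2   (first instance, holds the lock files)
# 0x1c = 0b11100 -> cores 2,3,4   (overlaps the first mask on core 2)
build/bin/spdk_tgt -m 0x7 &
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # expected to exit with:
#   claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process <pid> has claimed it.
ls /var/tmp/spdk_cpu_lock_*                          # only _000 _001 _002 remain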
00:05:51.120 [2024-07-15 12:40:21.883192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.120 [2024-07-15 12:40:21.964327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.120 [2024-07-15 12:40:21.964434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.120 [2024-07-15 12:40:21.964434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1544610 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1544610 /var/tmp/spdk2.sock 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1544610 ']' 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.688 12:40:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.947 [2024-07-15 12:40:22.684558] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:51.947 [2024-07-15 12:40:22.684607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544610 ] 00:05:51.947 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.947 [2024-07-15 12:40:22.761454] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.947 [2024-07-15 12:40:22.761482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.206 [2024-07-15 12:40:22.919063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.206 [2024-07-15 12:40:22.919176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.206 [2024-07-15 12:40:22.919177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.771 [2024-07-15 12:40:23.502296] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1544424 has claimed it. 
00:05:52.771 request: 00:05:52.771 { 00:05:52.771 "method": "framework_enable_cpumask_locks", 00:05:52.771 "req_id": 1 00:05:52.771 } 00:05:52.771 Got JSON-RPC error response 00:05:52.771 response: 00:05:52.771 { 00:05:52.771 "code": -32603, 00:05:52.771 "message": "Failed to claim CPU core: 2" 00:05:52.771 } 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1544424 /var/tmp/spdk.sock 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1544424 ']' 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.771 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1544610 /var/tmp/spdk2.sock 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1544610 ']' 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
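The -32603 response above is the expected outcome of the overlap this test sets up. Reproduced by hand against an SPDK build, the sequence might look like the following sketch (sockets and masks copied from the log; the sleep is a crude stand-in for the harness's waitforlisten):

# Two targets with overlapping coremasks, started with locks disabled.
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 2   # stand-in for waitforlisten on both sockets

# First target claims cores 0-2 successfully...
./scripts/rpc.py framework_enable_cpumask_locks
# ...the second then fails with "Failed to claim CPU core: 2" (-32603),
# since 0x1c (cores 2-4) overlaps on core 2.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks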
00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.772 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.030 00:05:53.030 real 0m2.118s 00:05:53.030 user 0m0.864s 00:05:53.030 sys 0m0.186s 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.030 12:40:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.030 ************************************ 00:05:53.030 END TEST locking_overlapped_coremask_via_rpc 00:05:53.030 ************************************ 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:53.030 12:40:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.030 12:40:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1544424 ]] 00:05:53.030 12:40:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1544424 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1544424 ']' 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1544424 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544424 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544424' 00:05:53.030 killing process with pid 1544424 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1544424 00:05:53.030 12:40:23 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1544424 00:05:53.597 12:40:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1544610 ]] 00:05:53.597 12:40:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1544610 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1544610 ']' 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1544610 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544610 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544610' 00:05:53.597 killing process with pid 1544610 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1544610 00:05:53.597 12:40:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1544610 00:05:53.856 12:40:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:53.856 12:40:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:53.856 12:40:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1544424 ]] 00:05:53.856 12:40:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1544424 00:05:53.856 12:40:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1544424 ']' 00:05:53.856 12:40:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1544424 00:05:53.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1544424) - No such process 00:05:53.856 12:40:24 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1544424 is not found' 00:05:53.856 Process with pid 1544424 is not found 00:05:53.856 12:40:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1544610 ]] 00:05:53.856 12:40:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1544610 00:05:53.856 12:40:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1544610 ']' 00:05:53.857 12:40:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1544610 00:05:53.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1544610) - No such process 00:05:53.857 12:40:24 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1544610 is not found' 00:05:53.857 Process with pid 1544610 is not found 00:05:53.857 12:40:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:53.857 00:05:53.857 real 0m16.976s 00:05:53.857 user 0m29.205s 00:05:53.857 sys 0m4.953s 00:05:53.857 12:40:24 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.857 12:40:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.857 ************************************ 00:05:53.857 END TEST cpu_locks 00:05:53.857 ************************************ 00:05:53.857 12:40:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:53.857 00:05:53.857 real 0m41.934s 00:05:53.857 user 1m19.637s 00:05:53.857 sys 0m8.343s 00:05:53.857 12:40:24 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.857 12:40:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.857 ************************************ 00:05:53.857 END TEST event 00:05:53.857 ************************************ 00:05:53.857 12:40:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.857 12:40:24 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:53.857 12:40:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.857 12:40:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.857 
12:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.857 ************************************ 00:05:53.857 START TEST thread 00:05:53.857 ************************************ 00:05:53.857 12:40:24 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:54.115 * Looking for test storage... 00:05:54.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:54.115 12:40:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.115 12:40:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:54.115 12:40:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.115 12:40:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.115 ************************************ 00:05:54.115 START TEST thread_poller_perf 00:05:54.115 ************************************ 00:05:54.115 12:40:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.115 [2024-07-15 12:40:24.888956] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:54.115 [2024-07-15 12:40:24.889029] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544994 ] 00:05:54.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.115 [2024-07-15 12:40:24.961426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.115 [2024-07-15 12:40:25.033958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.115 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:55.490 ====================================== 00:05:55.490 busy:2309630350 (cyc) 00:05:55.490 total_run_count: 414000 00:05:55.490 tsc_hz: 2300000000 (cyc) 00:05:55.490 ====================================== 00:05:55.490 poller_cost: 5578 (cyc), 2425 (nsec) 00:05:55.490 00:05:55.490 real 0m1.241s 00:05:55.490 user 0m1.152s 00:05:55.490 sys 0m0.083s 00:05:55.490 12:40:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.490 12:40:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.490 ************************************ 00:05:55.490 END TEST thread_poller_perf 00:05:55.490 ************************************ 00:05:55.490 12:40:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:55.490 12:40:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.490 12:40:26 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:55.490 12:40:26 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.490 12:40:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.490 ************************************ 00:05:55.490 START TEST thread_poller_perf 00:05:55.490 ************************************ 00:05:55.490 12:40:26 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.490 [2024-07-15 12:40:26.195362] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:55.490 [2024-07-15 12:40:26.195433] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545249 ] 00:05:55.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.490 [2024-07-15 12:40:26.264763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.490 [2024-07-15 12:40:26.337032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.490 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:56.867 ====================================== 00:05:56.867 busy:2301558380 (cyc) 00:05:56.867 total_run_count: 5485000 00:05:56.867 tsc_hz: 2300000000 (cyc) 00:05:56.867 ====================================== 00:05:56.867 poller_cost: 419 (cyc), 182 (nsec) 00:05:56.867 00:05:56.867 real 0m1.230s 00:05:56.867 user 0m1.140s 00:05:56.867 sys 0m0.085s 00:05:56.867 12:40:27 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.867 12:40:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.867 ************************************ 00:05:56.867 END TEST thread_poller_perf 00:05:56.867 ************************************ 00:05:56.867 12:40:27 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:56.867 12:40:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:56.867 00:05:56.867 real 0m2.691s 00:05:56.867 user 0m2.384s 00:05:56.867 sys 0m0.314s 00:05:56.867 12:40:27 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.867 12:40:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.867 ************************************ 00:05:56.867 END TEST thread 00:05:56.867 ************************************ 00:05:56.867 12:40:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.867 12:40:27 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:56.867 12:40:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.867 12:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.867 12:40:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.867 ************************************ 00:05:56.867 START TEST accel 00:05:56.867 ************************************ 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:56.867 * Looking for test storage... 00:05:56.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:56.867 12:40:27 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:56.867 12:40:27 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:56.867 12:40:27 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.867 12:40:27 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1545536 00:05:56.867 12:40:27 accel -- accel/accel.sh@63 -- # waitforlisten 1545536 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@829 -- # '[' -z 1545536 ']' 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.867 12:40:27 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.867 12:40:27 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
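poller_cost in the two result blocks above is simply busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Rechecking the first block's numbers (values copied from the log):

# poller_cost = busy / total_run_count; nsec = cycles / (tsc_hz / 1e9)
awk 'BEGIN {
  busy = 2309630350; runs = 414000; tsc_hz = 2300000000
  cyc = busy / runs
  printf "%.0f cyc, %.0f nsec\n", cyc, cyc / (tsc_hz / 1e9)
}'
# -> 5579 cyc, 2426 nsec; the log's 5578/2425 truncates instead of rounding.
# The 0-microsecond run amortizes the same way: 2301558380 / 5485000 ~= 419 cyc.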
00:05:56.867 12:40:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.867 12:40:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.867 12:40:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.867 12:40:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.867 12:40:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.867 12:40:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.867 12:40:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:56.867 12:40:27 accel -- accel/accel.sh@41 -- # jq -r . 00:05:56.867 [2024-07-15 12:40:27.650637] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:56.867 [2024-07-15 12:40:27.650683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545536 ] 00:05:56.867 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.867 [2024-07-15 12:40:27.720452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.867 [2024-07-15 12:40:27.798748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.837 12:40:28 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.837 12:40:28 accel -- common/autotest_common.sh@862 -- # return 0 00:05:57.837 12:40:28 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:57.837 12:40:28 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:57.837 12:40:28 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:57.837 12:40:28 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:57.837 12:40:28 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:57.837 12:40:28 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:57.837 12:40:28 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.837 12:40:28 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:57.837 12:40:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.837 12:40:28 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.837 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.837 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.837 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.837 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.837 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.837 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.837 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.837 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.837 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.837 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.837 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.837 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 
12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # IFS== 00:05:57.838 12:40:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:57.838 12:40:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:57.838 12:40:28 accel -- accel/accel.sh@75 -- # killprocess 1545536 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@948 -- # '[' -z 1545536 ']' 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@952 -- # kill -0 1545536 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@953 -- # uname 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545536 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545536' 00:05:57.838 killing process with pid 1545536 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@967 -- # kill 1545536 00:05:57.838 12:40:28 accel -- common/autotest_common.sh@972 -- # wait 1545536 00:05:58.096 12:40:28 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:58.096 12:40:28 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.096 12:40:28 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:58.096 12:40:28 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
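The opcode walk above comes from a single accel_get_opc_assignments RPC call, unpacked by the jq filter shown in the trace. Queried by hand (default socket assumed), it would look like this sketch:

# List opcode->module assignments exactly the way accel.sh does.
./scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# With no hardware accel modules configured, every opcode maps to
# "software": copy=software, fill=software, crc32c=software, and so on.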
00:05:58.096 12:40:28 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.096 12:40:28 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.096 12:40:28 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.096 12:40:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.096 ************************************ 00:05:58.096 START TEST accel_missing_filename 00:05:58.096 ************************************ 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.096 12:40:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:58.096 12:40:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:58.096 [2024-07-15 12:40:29.026533] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:58.096 [2024-07-15 12:40:29.026603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545800 ] 00:05:58.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.353 [2024-07-15 12:40:29.095829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.353 [2024-07-15 12:40:29.172129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.353 [2024-07-15 12:40:29.213011] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.353 [2024-07-15 12:40:29.272993] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:58.612 A filename is required. 
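"A filename is required." is accel_perf refusing a compress workload with no -l input file, which is exactly what this negative test wants. A sketch of the failing and a working call (any readable file serves as input; the suite uses test/accel/bib):

# Fails as above: compress with no input file, exit status 1.
./build/examples/accel_perf -t 1 -w compress

# Sketch of a valid call: -l names the uncompressed input.
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib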
00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.612 00:05:58.612 real 0m0.349s 00:05:58.612 user 0m0.266s 00:05:58.612 sys 0m0.124s 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.612 12:40:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:58.612 ************************************ 00:05:58.612 END TEST accel_missing_filename 00:05:58.612 ************************************ 00:05:58.612 12:40:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.612 12:40:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.612 12:40:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:58.612 12:40:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.612 12:40:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.612 ************************************ 00:05:58.612 START TEST accel_compress_verify 00:05:58.612 ************************************ 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.612 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.612 12:40:29 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:58.612 12:40:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:58.612 [2024-07-15 12:40:29.441550] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:58.612 [2024-07-15 12:40:29.441621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545938 ] 00:05:58.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.612 [2024-07-15 12:40:29.510617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.870 [2024-07-15 12:40:29.589132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.870 [2024-07-15 12:40:29.630426] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.870 [2024-07-15 12:40:29.690466] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:58.870 00:05:58.870 Compression does not support the verify option, aborting. 00:05:58.870 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:58.870 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.870 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:58.870 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:58.870 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:58.871 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.871 00:05:58.871 real 0m0.351s 00:05:58.871 user 0m0.263s 00:05:58.871 sys 0m0.128s 00:05:58.871 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.871 12:40:29 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 ************************************ 00:05:58.871 END TEST accel_compress_verify 00:05:58.871 ************************************ 00:05:58.871 12:40:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.871 12:40:29 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:58.871 12:40:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:58.871 12:40:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.871 12:40:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.129 ************************************ 00:05:59.129 START TEST accel_wrong_workload 00:05:59.129 ************************************ 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:59.129 12:40:29 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:59.129 12:40:29 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:59.129 Unsupported workload type: foobar 00:05:59.129 [2024-07-15 12:40:29.858583] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:59.129 accel_perf options: 00:05:59.129 [-h help message] 00:05:59.129 [-q queue depth per core] 00:05:59.129 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:59.129 [-T number of threads per core 00:05:59.129 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:59.129 [-t time in seconds] 00:05:59.129 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:59.129 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:59.129 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:59.129 [-l for compress/decompress workloads, name of uncompressed input file 00:05:59.129 [-S for crc32c workload, use this seed value (default 0) 00:05:59.129 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:59.129 [-f for fill workload, use this BYTE value (default 255) 00:05:59.129 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:59.129 [-y verify result if this switch is on] 00:05:59.129 [-a tasks to allocate per core (default: same value as -q)] 00:05:59.129 Can be used to spread operations across a wider range of memory. 
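The options summary above is printed whenever argument parsing rejects a value; here the trigger was the deliberately bogus workload name. A one-line sketch of the same failure:

# Unsupported workload name is rejected up front; expect exit status 1.
./build/examples/accel_perf -t 1 -w foobar; echo "exit=$?"
# -> "Unsupported workload type: foobar" plus the usage text, exit=1

The next test (accel_negative_buffers) drives the same error path with -x -1 instead.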
00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.129 00:05:59.129 real 0m0.033s 00:05:59.129 user 0m0.021s 00:05:59.129 sys 0m0.012s 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.129 12:40:29 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:59.129 ************************************ 00:05:59.129 END TEST accel_wrong_workload 00:05:59.129 ************************************ 00:05:59.129 Error: writing output failed: Broken pipe 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.129 12:40:29 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.129 ************************************ 00:05:59.129 START TEST accel_negative_buffers 00:05:59.129 ************************************ 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:59.129 12:40:29 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:59.129 -x option must be non-negative. 
00:05:59.129 [2024-07-15 12:40:29.960034] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:59.129 accel_perf options: 00:05:59.129 [-h help message] 00:05:59.129 [-q queue depth per core] 00:05:59.129 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:59.129 [-T number of threads per core 00:05:59.129 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:59.129 [-t time in seconds] 00:05:59.129 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:59.129 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:59.129 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:59.129 [-l for compress/decompress workloads, name of uncompressed input file 00:05:59.129 [-S for crc32c workload, use this seed value (default 0) 00:05:59.129 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:59.129 [-f for fill workload, use this BYTE value (default 255) 00:05:59.129 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:59.129 [-y verify result if this switch is on] 00:05:59.129 [-a tasks to allocate per core (default: same value as -q)] 00:05:59.129 Can be used to spread operations across a wider range of memory. 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.129 00:05:59.129 real 0m0.032s 00:05:59.129 user 0m0.024s 00:05:59.129 sys 0m0.008s 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.129 12:40:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:59.129 ************************************ 00:05:59.129 END TEST accel_negative_buffers 00:05:59.129 ************************************ 00:05:59.129 Error: writing output failed: Broken pipe 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.129 12:40:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.129 12:40:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.129 ************************************ 00:05:59.129 START TEST accel_crc32c 00:05:59.129 ************************************ 00:05:59.129 12:40:30 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:59.129 12:40:30 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:59.129 [2024-07-15 12:40:30.065706] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:59.129 [2024-07-15 12:40:30.065764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546106 ] 00:05:59.389 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.389 [2024-07-15 12:40:30.123935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.389 [2024-07-15 12:40:30.200036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.389 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.390 12:40:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:00.768 12:40:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.768 00:06:00.768 real 0m1.341s 00:06:00.768 user 0m1.237s 00:06:00.768 sys 0m0.116s 00:06:00.768 12:40:31 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.768 12:40:31 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:00.768 ************************************ 00:06:00.768 END TEST accel_crc32c 00:06:00.768 ************************************ 00:06:00.768 12:40:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.768 12:40:31 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:00.768 12:40:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:00.768 12:40:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.768 12:40:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.768 ************************************ 00:06:00.768 START TEST accel_crc32c_C2 00:06:00.768 ************************************ 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:00.768 12:40:31 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:00.768 [2024-07-15 12:40:31.475613] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:00.768 [2024-07-15 12:40:31.475683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546361 ] 00:06:00.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.768 [2024-07-15 12:40:31.543552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.768 [2024-07-15 12:40:31.614611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:00.768 12:40:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.147 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.148 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:02.148 12:40:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.148 00:06:02.148 real 0m1.347s 00:06:02.148 user 0m1.235s 00:06:02.148 sys 0m0.125s 00:06:02.148 12:40:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.148 12:40:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:02.148 ************************************ 00:06:02.148 END TEST accel_crc32c_C2 00:06:02.148 ************************************ 00:06:02.148 12:40:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.148 12:40:32 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:02.148 12:40:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:02.148 12:40:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.148 12:40:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.148 ************************************ 00:06:02.148 START TEST accel_copy 00:06:02.148 ************************************ 00:06:02.148 12:40:32 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
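(Each TEST block in this log repeats the same pattern: run_test names the case, accel_test assembles the accel_perf command line, the xtrace dump echoes the effective configuration, and the closing "[[ -n software ]]" / "[[ -n <opcode> ]]" checks confirm which module ran the opcode before the real/user/sys timing is printed. Only the workload flags change from test to test; verbatim from the traces:

    accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y   # TEST accel_crc32c
    accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2    # TEST accel_crc32c_C2
    accel_perf -c /dev/fd/62 -t 1 -w copy -y           # TEST accel_copy, starting here
)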
00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:02.148 12:40:32 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:02.148 [2024-07-15 12:40:32.888773] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:02.148 [2024-07-15 12:40:32.888841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546607 ] 00:06:02.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.148 [2024-07-15 12:40:32.958973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.148 [2024-07-15 12:40:33.031546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.148 12:40:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 
12:40:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.525 12:40:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.526 12:40:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:03.526 12:40:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.526 00:06:03.526 real 0m1.351s 00:06:03.526 user 0m1.231s 00:06:03.526 sys 0m0.131s 00:06:03.526 12:40:34 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.526 12:40:34 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:03.526 ************************************ 00:06:03.526 END TEST accel_copy 00:06:03.526 ************************************ 00:06:03.526 12:40:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.526 12:40:34 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:03.526 12:40:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:03.526 12:40:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.526 12:40:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.526 ************************************ 00:06:03.526 START TEST accel_fill 00:06:03.526 ************************************ 00:06:03.526 12:40:34 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:03.526 12:40:34 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:03.526 [2024-07-15 12:40:34.305605] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:03.526 [2024-07-15 12:40:34.305679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546860 ] 00:06:03.526 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.526 [2024-07-15 12:40:34.373087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.526 [2024-07-15 12:40:34.445516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
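(Note on the fill case: accel_test was invoked with "-f 128", and the configuration dump above echoes the fill byte back as val=0x80, the same value in hex (128 = 0x80); the 64s from "-q 64 -a 64" likewise reappear as the val=64 pairs just below.)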
00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:03.785 12:40:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:04.721 12:40:35 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:04.721 12:40:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.722 12:40:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:04.722 12:40:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.722 00:06:04.722 real 0m1.347s 00:06:04.722 user 0m1.237s 00:06:04.722 sys 0m0.122s 00:06:04.722 12:40:35 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.722 12:40:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 ************************************ 00:06:04.722 END TEST accel_fill 00:06:04.722 ************************************ 00:06:04.722 12:40:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.722 12:40:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:04.722 12:40:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:04.722 12:40:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.722 12:40:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.980 ************************************ 00:06:04.980 START TEST accel_copy_crc32c 00:06:04.980 ************************************ 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:04.980 [2024-07-15 12:40:35.718220] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:04.980 [2024-07-15 12:40:35.718291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547105 ] 00:06:04.980 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.980 [2024-07-15 12:40:35.788001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.980 [2024-07-15 12:40:35.860108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.980 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.981 
12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.981 12:40:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.415 00:06:06.415 real 0m1.349s 00:06:06.415 user 0m1.240s 00:06:06.415 sys 0m0.121s 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.415 12:40:37 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:06.415 ************************************ 00:06:06.415 END TEST accel_copy_crc32c 00:06:06.415 ************************************ 00:06:06.415 12:40:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.415 12:40:37 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:06.415 12:40:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:06.415 12:40:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.415 12:40:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.415 ************************************ 00:06:06.415 START TEST accel_copy_crc32c_C2 00:06:06.415 ************************************ 00:06:06.415 12:40:37 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:06.415 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.415 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:06.415 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:06.416 [2024-07-15 12:40:37.127378] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:06.416 [2024-07-15 12:40:37.127421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547361 ] 00:06:06.416 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.416 [2024-07-15 12:40:37.193762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.416 [2024-07-15 12:40:37.265075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
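(This run repeats the copy_crc32c workload with "-C 2", which appears to set the vector/chain count for the crc32c-style workloads; the two command lines from the traces differ only in that flag:

    accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y        # TEST accel_copy_crc32c
    accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2   # TEST accel_copy_crc32c_C2

Consistent with that, the configuration dump below lists a 4096-byte and an 8192-byte buffer, where the plain run used two 4096-byte buffers.)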
00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.416 12:40:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.794 12:40:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:07.794 12:40:38 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:07.794 12:40:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:07.794 12:40:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:07.794 12:40:38 accel -- common/autotest_common.sh@10 -- # set +x
00:06:07.794 ************************************
00:06:07.794 START TEST accel_dualcast
00:06:07.794 ************************************
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@31-41 -- # build_accel_config: accel_json_cfg=(), no extra modules or driver requested, jq -r .
00:06:07.794 [2024-07-15 12:40:38.533699] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:07.794 [2024-07-15 12:40:38.533747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547613 ]
00:06:07.794 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.794 [2024-07-15 12:40:38.599409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.794 [2024-07-15 12:40:38.670686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.794 12:40:38 accel.accel_dualcast -- accel/accel.sh@19-23 -- # config read loop (IFS=: / read -r var val): 0x1, dualcast (accel_opc=dualcast), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes, with empty val= reads between entries
00:06:09.172 12:40:39 accel.accel_dualcast -- accel/accel.sh@19-21 -- # repeated empty val= reads after the run
00:06:09.172 12:40:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:09.172 12:40:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:09.172 12:40:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:09.172
00:06:09.172 real 0m1.342s
00:06:09.172 user 0m1.235s
00:06:09.172 sys 0m0.119s
00:06:09.172 12:40:39 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:09.172 12:40:39 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:09.172 ************************************
00:06:09.172 END TEST accel_dualcast
00:06:09.172 ************************************
00:06:09.172 12:40:39 accel -- common/autotest_common.sh@1142 -- # return 0
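To replay one of these runs outside the harness, the traced command line can be invoked directly. A sketch assuming the workspace layout above; -c /dev/fd/62 is how the harness feeds a JSON accel config, and omitting it should fall back to defaults:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Flags as seen in the trace: -t run time (seconds), -w workload, -y verify results
    ./build/examples/accel_perf -t 1 -w dualcast -y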
00:06:09.172 12:40:39 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:09.172 12:40:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:09.172 12:40:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:09.172 12:40:39 accel -- common/autotest_common.sh@10 -- # set +x
00:06:09.172 ************************************
00:06:09.172 START TEST accel_compare
00:06:09.172 ************************************
00:06:09.172 12:40:39 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:09.172 12:40:39 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:09.173 12:40:39 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:09.173 12:40:39 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:09.173 12:40:39 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:09.173 [2024-07-15 12:40:39.941884] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:09.173 [2024-07-15 12:40:39.941940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547874 ]
00:06:09.173 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.173 [2024-07-15 12:40:40.014301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.173 [2024-07-15 12:40:40.102953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.431 12:40:40 accel.accel_compare -- accel/accel.sh@19-23 -- # config read loop (IFS=: / read -r var val): 0x1, compare (accel_opc=compare), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes, with empty val= reads between entries
00:06:10.368 12:40:41 accel.accel_compare -- accel/accel.sh@19-21 -- # repeated empty val= reads after the run
00:06:10.368 12:40:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:10.368 12:40:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:10.368 12:40:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:10.368
00:06:10.368 real 0m1.367s
00:06:10.368 user 0m1.262s
00:06:10.368 sys 0m0.118s
00:06:10.368 12:40:41 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:10.368 12:40:41 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:06:10.368 ************************************
00:06:10.368 END TEST accel_compare
00:06:10.368 ************************************
00:06:10.368 12:40:41 accel -- common/autotest_common.sh@1142 -- # return 0
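Functionally, -w compare checks two equal-sized buffers for byte equality. A rough shell-level stand-in for that pass/fail decision (illustrative only, not the SPDK data path):

    head -c 4096 /dev/urandom > buf1   # 4096 bytes, matching the traced block size
    cp buf1 buf2
    if cmp -s buf1 buf2; then echo "compare: match"; else echo "compare: mismatch"; fi
    rm -f buf1 buf2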
00:06:10.368 12:40:41 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:10.368 12:40:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:10.368 12:40:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:10.368 12:40:41 accel -- common/autotest_common.sh@10 -- # set +x
00:06:10.628 ************************************
00:06:10.628 START TEST accel_xor
00:06:10.628 ************************************
00:06:10.628 12:40:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:10.628 12:40:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:10.628 12:40:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:10.628 12:40:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:10.628 12:40:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:10.628 [2024-07-15 12:40:41.373229] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:10.628 [2024-07-15 12:40:41.373277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548128 ]
00:06:10.628 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.628 [2024-07-15 12:40:41.439517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.628 [2024-07-15 12:40:41.511368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.628 12:40:41 accel.accel_xor -- accel/accel.sh@19-23 -- # config read loop (IFS=: / read -r var val): 0x1, xor (accel_opc=xor), 2 xor sources, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes, with empty val= reads between entries
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@19-21 -- # repeated empty val= reads after the run
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:12.004
00:06:12.004 real 0m1.343s
00:06:12.004 user 0m1.244s
00:06:12.004 sys 0m0.111s
00:06:12.004 12:40:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.004 12:40:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:12.004 ************************************
00:06:12.004 END TEST accel_xor
00:06:12.004 ************************************
00:06:12.004 12:40:42 accel -- common/autotest_common.sh@1142 -- # return 0
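The xor workload XORs N equal-length source buffers into one destination. A toy byte-wise illustration in pure bash, shown with three sources, which also covers the -x 3 run that follows:

    # Three tiny "source buffers" as arrays of byte values (illustrative data).
    src1=(0xAA 0x0F 0x55) src2=(0x55 0xF0 0x55) src3=(0x01 0x02 0x03)
    for i in "${!src1[@]}"; do
        # dst[i] = src1[i] ^ src2[i] ^ src3[i]
        printf 'dst[%d]=0x%02X\n' "$i" $(( src1[i] ^ src2[i] ^ src3[i] ))
    done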
00:06:12.004 12:40:42 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:12.004 12:40:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:12.004 12:40:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.004 12:40:42 accel -- common/autotest_common.sh@10 -- # set +x
00:06:12.004 ************************************
00:06:12.004 START TEST accel_xor
00:06:12.004 ************************************
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:12.004 12:40:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:12.004 [2024-07-15 12:40:42.785677] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:12.004 [2024-07-15 12:40:42.785746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548375 ]
00:06:12.004 EAL: No free 2048 kB hugepages reported on node 1
00:06:12.004 [2024-07-15 12:40:42.855602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.004 [2024-07-15 12:40:42.927819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.262 12:40:42 accel.accel_xor -- accel/accel.sh@19-23 -- # config read loop (IFS=: / read -r var val): 0x1, xor (accel_opc=xor), 3 xor sources, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes, with empty val= reads between entries
00:06:13.197 12:40:44 accel.accel_xor -- accel/accel.sh@19-21 -- # repeated empty val= reads after the run
00:06:13.197 12:40:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:13.197 12:40:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:13.197 12:40:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:13.197
00:06:13.197 real 0m1.352s
00:06:13.197 user 0m1.239s
00:06:13.197 sys 0m0.125s
00:06:13.197 12:40:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:13.197 12:40:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:13.197 ************************************
00:06:13.197 END TEST accel_xor
00:06:13.197 ************************************
00:06:13.197 12:40:44 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:13.197 12:40:44 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:13.197 12:40:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:13.197 12:40:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.197 12:40:44 accel -- common/autotest_common.sh@10 -- # set +x
00:06:13.455 ************************************
00:06:13.455 START TEST accel_dif_verify
00:06:13.455 ************************************
00:06:13.455 12:40:44 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:06:13.455 12:40:44 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:06:13.455 12:40:44 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:13.455 12:40:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:13.455 12:40:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:13.455 [2024-07-15 12:40:44.203086] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:13.455 [2024-07-15 12:40:44.203153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548620 ]
00:06:13.455 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.455 [2024-07-15 12:40:44.270847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.455 [2024-07-15 12:40:44.343088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.455 12:40:44 accel.accel_dif_verify -- accel/accel.sh@19-23 -- # config read loop (IFS=: / read -r var val): 0x1, dif_verify (accel_opc=dif_verify), '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', No, with empty val= reads between entries
00:06:14.832 12:40:45 accel.accel_dif_verify -- accel/accel.sh@19-21 -- # repeated empty val= reads after the run
00:06:14.832 12:40:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:14.832 12:40:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:14.832 12:40:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:14.832
00:06:14.832 real 0m1.349s
00:06:14.832 user 0m1.243s
00:06:14.832 sys 0m0.119s
00:06:14.832 12:40:45 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:14.832 12:40:45 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:06:14.832 ************************************
00:06:14.832 END TEST accel_dif_verify
00:06:14.832 ************************************
00:06:14.832 12:40:45 accel -- common/autotest_common.sh@1142 -- # return 0
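The dif workloads carry extra sizes through the config loop: '4096 bytes', '512 bytes', and '8 bytes'. Assuming those are the data-buffer size, the protected block size, and the per-block protection-information size (an assumption; the log does not label them), the buffer math works out as:

    data=4096 blk=512 dif=8            # assumed roles for the traced sizes
    blocks=$(( data / blk ))           # 8 protected blocks per buffer
    protected=$(( data + blocks*dif )) # 4160 bytes once 8B of PI follows each block
    echo "blocks=$blocks protected_size=$protected"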
00:06:14.832 12:40:45 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:14.832 12:40:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:14.832 12:40:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:14.832 12:40:45 accel -- common/autotest_common.sh@10 -- # set +x
00:06:14.832 ************************************
00:06:14.832 START TEST accel_dif_generate
00:06:14.832 ************************************
00:06:14.833 12:40:45 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:06:14.833 12:40:45 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:06:14.833 12:40:45 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:14.833 12:40:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:14.833 12:40:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:06:14.833 [2024-07-15 12:40:45.618256] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:14.833 [2024-07-15 12:40:45.618305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548875 ]
00:06:14.833 EAL: No free 2048 kB hugepages reported on node 1
00:06:14.833 [2024-07-15 12:40:45.684720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.833 [2024-07-15 12:40:45.755985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.092 12:40:45 accel.accel_dif_generate -- accel/accel.sh@19-23 -- # config read loop (IFS=: / read -r var val): 0x1, dif_generate (accel_opc=dif_generate), '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', No, with empty val= reads between entries
00:06:16.029 12:40:46 accel.accel_dif_generate -- accel/accel.sh@19-21 -- # repeated empty val= reads after the run
00:06:16.029 12:40:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.029 12:40:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:16.029 12:40:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.029
00:06:16.029 real 0m1.345s
00:06:16.029 user 0m1.230s
00:06:16.029 sys 0m0.128s
00:06:16.029 12:40:46 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.029 12:40:46 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:06:16.029 ************************************
00:06:16.029 END TEST accel_dif_generate
00:06:16.029 ************************************
00:06:16.029 12:40:46 accel -- common/autotest_common.sh@1142 -- # return 0
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.029 00:06:16.029 real 0m1.345s 00:06:16.029 user 0m1.230s 00:06:16.029 sys 0m0.128s 00:06:16.029 12:40:46 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.029 12:40:46 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:16.029 ************************************ 00:06:16.029 END TEST accel_dif_generate 00:06:16.029 ************************************ 00:06:16.029 12:40:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.030 12:40:46 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:16.030 12:40:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:16.030 12:40:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.030 12:40:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.289 ************************************ 00:06:16.289 START TEST accel_dif_generate_copy 00:06:16.289 ************************************ 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:16.289 [2024-07-15 12:40:47.030488] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
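The long runs of case "$var" in, IFS=:, and read -r var val fragments that fill this stretch of the log are bash xtrace from the harness scanning accel_perf's "key: value" summary output for the opcode and module that actually ran; the accel_opc=dif_generate and accel_module=software assignments above are that scan hitting its two keys, and the @27 checks afterwards assert on the result. Inferred from the trace alone (not the verbatim accel.sh, and with the matched key names being assumptions), the loop has roughly this shape:

  # Sketch inferred from the xtrace; the matched key names are assumptions, not verbatim accel.sh.
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  accel_opc= accel_module=
  while IFS=: read -r var val; do
      case "$var" in
          *opcode*) accel_opc=$(echo $val) ;;     # echoed back as e.g. dif_generate
          *module*) accel_module=$(echo $val) ;;  # echoed back as e.g. software
      esac
  done < <("$PERF" -c /dev/fd/62 -t 1 -w dif_generate)   # fd 62 carries the harness's (here empty) accel JSON config
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]   # the @27 checks seen in the trace
  [[ $accel_module == software ]]                 # assert the software path, not a hardware module, ran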
00:06:16.289 [2024-07-15 12:40:47.030555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549121 ] 00:06:16.289 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.289 [2024-07-15 12:40:47.099700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.289 [2024-07-15 12:40:47.171408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.289 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.290 12:40:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.668 00:06:17.668 real 0m1.349s 00:06:17.668 user 0m1.240s 00:06:17.668 sys 0m0.121s 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.668 12:40:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:17.668 ************************************ 00:06:17.668 END TEST accel_dif_generate_copy 00:06:17.668 ************************************ 00:06:17.668 12:40:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.668 12:40:48 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:17.668 12:40:48 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.668 12:40:48 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:17.668 12:40:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.668 12:40:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.668 ************************************ 00:06:17.668 START TEST accel_comp 00:06:17.668 ************************************ 00:06:17.668 12:40:48 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.668 12:40:48 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.668 12:40:48 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.669 12:40:48 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.669 12:40:48 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.669 12:40:48 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:17.669 12:40:48 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:17.669 [2024-07-15 12:40:48.441770] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:17.669 [2024-07-15 12:40:48.441823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549378 ] 00:06:17.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.669 [2024-07-15 12:40:48.509689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.669 [2024-07-15 12:40:48.581489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:17.929 12:40:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:18.867 12:40:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.867 00:06:18.867 real 0m1.345s 00:06:18.867 user 0m1.236s 00:06:18.867 sys 0m0.122s 00:06:18.867 12:40:49 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.867 12:40:49 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:18.867 ************************************ 00:06:18.867 END TEST accel_comp 00:06:18.867 ************************************ 00:06:18.867 12:40:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.867 12:40:49 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:18.867 12:40:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.867 12:40:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.867 12:40:49 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.126 ************************************ 00:06:19.126 START TEST accel_decomp 00:06:19.126 ************************************ 00:06:19.126 12:40:49 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.126 12:40:49 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:19.126 12:40:49 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:19.126 12:40:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.126 12:40:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:19.127 12:40:49 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:19.127 [2024-07-15 12:40:49.853298] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
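Relative to the DIF runs, the compress and decompress cases swap in two flags, both visible verbatim in their command lines: -l points accel_perf at the input file test/accel/bib, and -y, present only on the decompress run just launched (whose parsed output below reads val=Yes where the compress run's read val=No), presumably switches on result verification. Just the two invocations, reproduced with this workspace's paths:

  # The compress and decompress command lines exactly as this log runs them.
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  "$PERF" -c /dev/fd/62 -t 1 -w compress   -l "$BIB"      # 1-second compress of the bib test file
  "$PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y   # decompress it back; -y presumably verifies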
00:06:19.127 [2024-07-15 12:40:49.853346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549626 ] 00:06:19.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.127 [2024-07-15 12:40:49.919850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.127 [2024-07-15 12:40:49.992165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:19.127 12:40:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.506 12:40:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.506 00:06:20.506 real 0m1.347s 00:06:20.506 user 0m1.239s 00:06:20.506 sys 0m0.123s 00:06:20.506 12:40:51 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.506 12:40:51 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:20.506 ************************************ 00:06:20.506 END TEST accel_decomp 00:06:20.506 ************************************ 00:06:20.506 12:40:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.506 12:40:51 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.506 12:40:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:20.506 12:40:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.506 12:40:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.506 ************************************ 00:06:20.506 START TEST accel_decomp_full 00:06:20.506 ************************************ 00:06:20.506 12:40:51 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:20.506 12:40:51 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:20.506 [2024-07-15 12:40:51.266774] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:20.506 [2024-07-15 12:40:51.266823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549874 ] 00:06:20.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.506 [2024-07-15 12:40:51.333488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.506 [2024-07-15 12:40:51.405788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.506 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.507 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:20.766 12:40:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.702 12:40:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.702 00:06:21.702 real 0m1.357s 00:06:21.702 user 0m1.243s 00:06:21.702 sys 0m0.127s 00:06:21.702 12:40:52 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.702 12:40:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:21.702 ************************************ 00:06:21.702 END TEST accel_decomp_full 00:06:21.702 ************************************ 00:06:21.702 12:40:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.702 12:40:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.702 12:40:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:21.702 12:40:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.702 12:40:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 ************************************ 00:06:21.962 START TEST accel_decomp_mcore 00:06:21.962 ************************************ 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:21.962 [2024-07-15 12:40:52.690475] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
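The _mcore variant reruns decompress with -m 0xf, a four-core CPU mask, and the effect shows up twice in the lines that follow: EAL reports four total cores instead of one and four reactors start (cores 0 through 3), and the timing flips from user roughly equal to real over to user roughly four times real (0m4.579s user against 0m1.360s real), consistent with SPDK reactors busy-polling every masked core for the full 1-second run. The invocation, verbatim from this log:

  # Multi-core rerun: -m takes a hex CPU mask (0xf = cores 0-3).
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  "$PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -m 0xf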
00:06:21.962 [2024-07-15 12:40:52.690525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550127 ] 00:06:21.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.962 [2024-07-15 12:40:52.757757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.962 [2024-07-15 12:40:52.832619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.962 [2024-07-15 12:40:52.832730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.962 [2024-07-15 12:40:52.832833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.962 [2024-07-15 12:40:52.832834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:21.962 12:40:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.371 00:06:23.371 real 0m1.360s 00:06:23.371 user 0m4.579s 00:06:23.371 sys 0m0.126s 00:06:23.371 12:40:54 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.371 12:40:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:23.371 ************************************ 00:06:23.371 END TEST accel_decomp_mcore 00:06:23.371 ************************************ 00:06:23.371 12:40:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.371 12:40:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.371 12:40:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:23.371 12:40:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.371 12:40:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.371 ************************************ 00:06:23.371 START TEST accel_decomp_full_mcore 00:06:23.371 ************************************ 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:23.371 [2024-07-15 12:40:54.115713] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
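The accel_perf invocation traced above drives a software decompress workload across four cores. A minimal re-run sketch outside the harness, assuming a local SPDK build at $SPDK_DIR (hypothetical path; the CI uses the Jenkins workspace checkout shown above) and omitting the generated accel JSON config that the harness feeds over /dev/fd/62 via -c:

    SPDK_DIR=/path/to/spdk   # assumption: your own checkout, built beforehand
    "$SPDK_DIR/build/examples/accel_perf" \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf
    # Flags mirror the command traced above; -m 0xf is a hex core mask
    # selecting cores 0-3, which matches the "Total cores available: 4"
    # notice and the four reactor threads started in the EAL output below.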
00:06:23.371 [2024-07-15 12:40:54.115764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550387 ] 00:06:23.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.371 [2024-07-15 12:40:54.183750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.371 [2024-07-15 12:40:54.259046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.371 [2024-07-15 12:40:54.259105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.371 [2024-07-15 12:40:54.259075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.371 [2024-07-15 12:40:54.259106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.371 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.372 12:40:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.748 00:06:24.748 real 0m1.374s 00:06:24.748 user 0m4.617s 00:06:24.748 sys 0m0.137s 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.748 12:40:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:24.748 ************************************ 00:06:24.748 END TEST accel_decomp_full_mcore 00:06:24.748 ************************************ 00:06:24.748 12:40:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.748 12:40:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.748 12:40:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:24.748 12:40:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.748 12:40:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.748 ************************************ 00:06:24.748 START TEST accel_decomp_mthread 00:06:24.748 ************************************ 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:24.748 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:24.748 [2024-07-15 12:40:55.559602] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:24.748 [2024-07-15 12:40:55.559652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550669 ] 00:06:24.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.748 [2024-07-15 12:40:55.626943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.748 [2024-07-15 12:40:55.698778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.008 12:40:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.945 00:06:25.945 real 0m1.353s 00:06:25.945 user 0m1.242s 00:06:25.945 sys 0m0.125s 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.945 12:40:56 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:25.945 ************************************ 00:06:25.945 END TEST accel_decomp_mthread 00:06:25.945 ************************************ 00:06:26.204 12:40:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.204 12:40:56 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.204 12:40:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:26.204 12:40:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.204 12:40:56 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:26.204 ************************************ 00:06:26.204 START TEST accel_decomp_full_mthread 00:06:26.204 ************************************ 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:26.204 12:40:56 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:26.204 [2024-07-15 12:40:56.977948] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
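The _mthread variants traced here keep the same decompress workload but swap the multi-core mask for -T 2; the EAL output that follows shows a single core (-c 0x1, one reactor), so the parallelism comes from two accel channels on one core rather than extra cores. An equivalent sketch, under the same $SPDK_DIR assumption as the earlier one:

    "$SPDK_DIR/build/examples/accel_perf" \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2
    # -T appears to set the per-core thread (channel) count, and -o 0 marks
    # the "full" variant: the '111250 bytes' value traced below suggests the
    # whole bib file is fed per operation instead of fixed 4096-byte chunks.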
00:06:26.204 [2024-07-15 12:40:56.978006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550932 ] 00:06:26.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.204 [2024-07-15 12:40:57.048980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.204 [2024-07-15 12:40:57.122035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.463 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 12:40:57 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 12:40:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.400 12:40:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.400 00:06:27.400 real 0m1.379s 00:06:27.400 user 0m1.260s 00:06:27.400 sys 0m0.132s 00:06:27.401 12:40:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.401 12:40:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:27.401 ************************************ 00:06:27.401 END 
TEST accel_decomp_full_mthread
00:06:27.401 ************************************
12:40:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:27.660 12:40:58 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:27.660 12:40:58 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:27.660 12:40:58 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:27.660 12:40:58 accel -- accel/accel.sh@137 -- # build_accel_config
00:06:27.660 12:40:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:27.660 12:40:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:27.660 12:40:58 accel -- common/autotest_common.sh@10 -- # set +x
00:06:27.660 12:40:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:27.660 12:40:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.660 12:40:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.660 12:40:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:27.660 12:40:58 accel -- accel/accel.sh@40 -- # local IFS=,
00:06:27.660 12:40:58 accel -- accel/accel.sh@41 -- # jq -r .
00:06:27.660 ************************************
00:06:27.660 START TEST accel_dif_functional_tests
00:06:27.660 ************************************
12:40:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:27.660 [2024-07-15 12:40:58.443958] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:27.660 [2024-07-15 12:40:58.443993] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551218 ]
00:06:27.660 EAL: No free 2048 kB hugepages reported on node 1
00:06:27.660 [2024-07-15 12:40:58.518061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:27.660 [2024-07-15 12:40:58.597412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:27.660 [2024-07-15 12:40:58.597518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.660 [2024-07-15 12:40:58.597519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:27.918
00:06:27.918
00:06:27.918 CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.918 http://cunit.sourceforge.net/
00:06:27.918
00:06:27.918
00:06:27.918 Suite: accel_dif
00:06:27.918 Test: verify: DIF generated, GUARD check ...passed
00:06:27.918 Test: verify: DIF generated, APPTAG check ...passed
00:06:27.918 Test: verify: DIF generated, REFTAG check ...passed
00:06:27.918 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:40:58.664517] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:27.918 passed
00:06:27.918 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:40:58.664567] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:27.918 passed
00:06:27.918 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:40:58.664586] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:27.918 passed
00:06:27.918 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:27.918 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:40:58.664627] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:27.918 passed
00:06:27.918 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:27.918 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:27.918 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:27.918 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:40:58.664720] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:27.918 passed
00:06:27.918 Test: verify copy: DIF generated, GUARD check ...passed
00:06:27.918 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:27.918 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:27.918 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:40:58.664822] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:27.918 passed
00:06:27.918 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:40:58.664842] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:27.918 passed
00:06:27.919 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:40:58.664861] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:27.919 passed
00:06:27.919 Test: generate copy: DIF generated, GUARD check ...passed
00:06:27.919 Test: generate copy: DIF generated, APTTAG check ...passed
00:06:27.919 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:27.919 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:27.919 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:27.919 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:27.919 Test: generate copy: iovecs-len validate ...[2024-07-15 12:40:58.665023] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:27.919 passed
00:06:27.919 Test: generate copy: buffer alignment validate ...passed
00:06:27.919
00:06:27.919 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.919               suites      1      1    n/a      0        0
00:06:27.919                tests     26     26     26      0        0
00:06:27.919              asserts    115    115    115      0      n/a
00:06:27.919
00:06:27.919 Elapsed time = 0.000 seconds
00:06:27.919
00:06:27.919 real 0m0.434s
00:06:27.919 user 0m0.625s
00:06:27.919 sys 0m0.160s
00:06:27.919 12:40:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:27.919 12:40:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:06:27.919 ************************************
00:06:27.919 END TEST accel_dif_functional_tests
00:06:27.919 ************************************
00:06:27.919 12:40:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:27.919
00:06:27.919 real 0m31.358s
00:06:27.919 user 0m34.955s
00:06:27.919 sys 0m4.478s
00:06:27.919 12:40:58 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:27.919 12:40:58 accel -- common/autotest_common.sh@10 -- # set +x
00:06:27.919 ************************************
00:06:27.919 END TEST accel
00:06:27.919 ************************************
00:06:28.177 12:40:58 -- common/autotest_common.sh@1142 -- # return 0
00:06:28.177 12:40:58 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
12:40:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
12:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
12:40:58 -- common/autotest_common.sh@10 -- # set +x
00:06:28.177 ************************************
00:06:28.177 START TEST accel_rpc
00:06:28.177 ************************************
12:40:58 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
12:40:59 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
12:40:59 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1551415
12:40:59 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1551415
12:40:59 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
12:40:59 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1551415 ']'
12:40:59 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
12:40:59 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
12:40:59 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:40:59 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
12:40:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:28.177 [2024-07-15 12:40:59.080112] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
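The *ERROR* lines in the accel_dif suite above are expected output, not failures: each "not generated" / "incorrect" case deliberately corrupts one field of the DIF metadata (Guard CRC, App Tag, or Ref Tag) and the test passes only if the verify path reports the mismatch, which is why all 26 tests and 115 asserts pass. To replay the suite by hand (a sketch; the harness feeds a generated accel config over /dev/fd/62, and an empty JSON config is assumed to be accepted here):

    "$SPDK_DIR/test/accel/dif/dif" -c <(echo '{}')
    # Prints the same per-test lines and Run Summary table; run_test treats
    # a non-zero exit status as a failure of the whole test.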
00:06:28.177 [2024-07-15 12:40:59.080176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551415 ]
00:06:28.177 EAL: No free 2048 kB hugepages reported on node 1
00:06:28.177 [2024-07-15 12:40:59.133341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.436 [2024-07-15 12:40:59.205616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.004 12:40:59 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:29.004 12:40:59 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:29.004 12:40:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:29.004 12:40:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:29.004 12:40:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:29.004 12:40:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:29.004 12:40:59 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:29.004 12:40:59 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:29.004 12:40:59 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:29.004 12:40:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.004 ************************************
00:06:29.004 START TEST accel_assign_opcode
00:06:29.004 ************************************
12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:29.004 [2024-07-15 12:40:59.931733] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:29.004 [2024-07-15 12:40:59.943761] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:29.004 12:40:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:29.263 software
00:06:29.263
00:06:29.263 real 0m0.246s
00:06:29.263 user 0m0.047s
00:06:29.263 sys 0m0.010s
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:29.263 12:41:00 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:29.263 ************************************
00:06:29.263 END TEST accel_assign_opcode
00:06:29.263 ************************************
00:06:29.263 12:41:00 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:29.263 12:41:00 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1551415
00:06:29.263 12:41:00 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1551415 ']'
00:06:29.263 12:41:00 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1551415
00:06:29.263 12:41:00 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:06:29.263 12:41:00 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:29.263 12:41:00 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1551415
00:06:29.522 12:41:00 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:29.522 12:41:00 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:29.522 12:41:00 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1551415'
killing process with pid 1551415
00:06:29.522 12:41:00 accel_rpc -- common/autotest_common.sh@967 -- # kill 1551415
00:06:29.522 12:41:00 accel_rpc -- common/autotest_common.sh@972 -- # wait 1551415
00:06:29.781
00:06:29.781 real 0m1.619s
00:06:29.781 user 0m1.695s
00:06:29.781 sys 0m0.431s
00:06:29.781 12:41:00 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:29.781 12:41:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.781 ************************************
00:06:29.781 END TEST accel_rpc
00:06:29.781 ************************************
00:06:29.781 12:41:00 -- common/autotest_common.sh@1142 -- # return 0
00:06:29.781 12:41:00 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
12:41:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
12:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
12:41:00 -- common/autotest_common.sh@10 -- # set +x
00:06:29.781 ************************************
00:06:29.781 START TEST app_cmdline
00:06:29.781 ************************************
12:41:00 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
* Looking for test storage...
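The accel_assign_opcode test that just ended is a thin wrapper over four RPCs against a spdk_tgt started with --wait-for-rpc. A manual equivalent, sketched with the rpc.py path from the repo layout above (order matches the trace; the NOTICE lines show that a bogus module name is accepted at this pre-init stage and then simply overridden):

    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # recorded, not validated yet
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # overrides the bogus assignment
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints "software", as logged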
00:06:29.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:29.781 12:41:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.781 12:41:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1551725 00:06:29.781 12:41:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1551725 00:06:29.781 12:41:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.781 12:41:00 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1551725 ']' 00:06:29.781 12:41:00 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.781 12:41:00 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.781 12:41:00 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.781 12:41:00 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.781 12:41:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.040 [2024-07-15 12:41:00.769463] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:30.040 [2024-07-15 12:41:00.769517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551725 ] 00:06:30.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.040 [2024-07-15 12:41:00.823829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.040 [2024-07-15 12:41:00.898074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:30.978 { 00:06:30.978 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:06:30.978 "fields": { 00:06:30.978 "major": 24, 00:06:30.978 "minor": 9, 00:06:30.978 "patch": 0, 00:06:30.978 "suffix": "-pre", 00:06:30.978 "commit": "2728651ee" 00:06:30.978 } 00:06:30.978 } 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.978 12:41:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:30.978 12:41:01 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.237 request: 00:06:31.237 { 00:06:31.237 "method": "env_dpdk_get_mem_stats", 00:06:31.237 "req_id": 1 00:06:31.237 } 00:06:31.237 Got JSON-RPC error response 00:06:31.237 response: 00:06:31.237 { 00:06:31.237 "code": -32601, 00:06:31.237 "message": "Method not found" 00:06:31.237 } 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.237 12:41:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1551725 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1551725 ']' 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1551725 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.237 12:41:01 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1551725 00:06:31.237 12:41:02 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.237 12:41:02 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.237 12:41:02 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1551725' 00:06:31.237 killing process with pid 1551725 00:06:31.237 12:41:02 app_cmdline -- common/autotest_common.sh@967 -- # kill 1551725 00:06:31.237 12:41:02 app_cmdline -- common/autotest_common.sh@972 -- # wait 1551725 00:06:31.495 00:06:31.495 real 0m1.690s 00:06:31.495 user 0m2.011s 00:06:31.495 sys 0m0.434s 00:06:31.495 12:41:02 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
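
The Method not found failure above is the point of this test rather than a defect: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any RPC outside that allow-list is rejected with JSON-RPC error -32601 before it is dispatched. A condensed sketch of the behaviour being asserted, using the same binary and flags as the trace:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods            # allowed: returns exactly the two permitted methods
    scripts/rpc.py spdk_get_version           # allowed: returns the version object shown above
    scripts/rpc.py env_dpdk_get_mem_stats     # not on the allow-list: fails with -32601 "Method not found"
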
00:06:31.495 12:41:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.495 ************************************ 00:06:31.495 END TEST app_cmdline 00:06:31.495 ************************************ 00:06:31.495 12:41:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.495 12:41:02 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:31.495 12:41:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.495 12:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.495 12:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.495 ************************************ 00:06:31.495 START TEST version 00:06:31.495 ************************************ 00:06:31.495 12:41:02 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:31.754 * Looking for test storage... 00:06:31.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.754 12:41:02 version -- app/version.sh@17 -- # get_header_version major 00:06:31.754 12:41:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # cut -f2 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.754 12:41:02 version -- app/version.sh@17 -- # major=24 00:06:31.754 12:41:02 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.754 12:41:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # cut -f2 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.754 12:41:02 version -- app/version.sh@18 -- # minor=9 00:06:31.754 12:41:02 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.754 12:41:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # cut -f2 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.754 12:41:02 version -- app/version.sh@19 -- # patch=0 00:06:31.754 12:41:02 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.754 12:41:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # cut -f2 00:06:31.754 12:41:02 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.754 12:41:02 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.754 12:41:02 version -- app/version.sh@22 -- # version=24.9 00:06:31.754 12:41:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.754 12:41:02 version -- app/version.sh@28 -- # version=24.9rc0 00:06:31.754 12:41:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:31.754 12:41:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:31.754 12:41:02 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:31.754 12:41:02 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:31.754 00:06:31.754 real 0m0.158s 00:06:31.754 user 0m0.094s 00:06:31.754 sys 0m0.100s 00:06:31.754 12:41:02 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.754 12:41:02 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.754 ************************************ 00:06:31.754 END TEST version 00:06:31.754 ************************************ 00:06:31.754 12:41:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.754 12:41:02 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@198 -- # uname -s 00:06:31.754 12:41:02 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:31.754 12:41:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:31.754 12:41:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:31.754 12:41:02 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:31.754 12:41:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.754 12:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.754 12:41:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:31.754 12:41:02 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:31.754 12:41:02 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.754 12:41:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:31.754 12:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.754 12:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:31.754 ************************************ 00:06:31.754 START TEST nvmf_tcp 00:06:31.754 ************************************ 00:06:31.754 12:41:02 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.014 * Looking for test storage... 00:06:32.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.014 12:41:02 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.014 12:41:02 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.014 12:41:02 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.014 12:41:02 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:32.014 12:41:02 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:32.014 12:41:02 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:32.014 12:41:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:32.014 12:41:02 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:32.014 12:41:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:32.014 12:41:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.014 12:41:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.014 ************************************ 00:06:32.014 START TEST nvmf_example 00:06:32.014 ************************************ 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:32.014 * Looking for test storage... 
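
As nvmf_example spins up, note that both nvmf suites in this run begin by sourcing test/nvmf/common.sh, whose variable assignments dominate the surrounding trace. The handful that matter for reading the rest of this log, condensed into a sketch (the NVME_HOSTID derivation is an assumption; the trace only shows the resulting value):

    NVMF_PORT=4420                        # primary listener port used throughout
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # this run: nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: the uuid tail of the generated hostnqn
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NET_TYPE=phy                          # physical e810 NICs rather than virtual interfaces
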
00:06:32.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.014 12:41:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:38.580 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:38.580 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:38.580 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:38.580 Found net devices under 0000:86:00.1: cvl_0_1 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT
00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:38.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:38.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms
00:06:38.580
00:06:38.580 --- 10.0.0.2 ping statistics ---
00:06:38.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:38.580 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:06:38.580 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:38.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:38.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
00:06:38.580
00:06:38.580 --- 10.0.0.1 ping statistics ---
00:06:38.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:38.581 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1555345
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1555345
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1555345 ']'
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
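
Everything nvmf_tcp_init did above reduces to a small recipe: the two e810 ports found earlier (cvl_0_0 and cvl_0_1) are split across a network namespace so that initiator-to-target traffic must cross the physical link, and the two pings prove both directions work. Collected from the trace, with the interface names and addresses this run chose:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # the target side lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host
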
00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.581 12:41:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:38.839 12:41:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:38.839 EAL: No free 2048 kB hugepages reported on node 1 
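
With the example target up, the suite provisions it entirely over RPC and then aims the stock perf tool at the resulting NVMe/TCP listener. The same sequence as the rpc_cmd calls above, written out directly (rpc_cmd here is the harness wrapper around scripts/rpc.py, talking to the target's RPC socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512          # 64 MiB ramdisk, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # then 10 s of 4 KiB random I/O at queue depth 64 with a 30% read mix:
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
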
00:06:51.098 Initializing NVMe Controllers
00:06:51.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:51.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:51.098 Initialization complete. Launching workers.
00:06:51.098 ========================================================
00:06:51.098                                                                           Latency(us)
00:06:51.098 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:51.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  18069.80      70.59    3543.74     608.60   15472.32
00:06:51.098 ========================================================
00:06:51.098 Total                                                                  :  18069.80      70.59    3543.74     608.60   15472.32
00:06:51.098
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:51.098 12:41:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:51.098 rmmod nvme_tcp
00:06:51.098 rmmod nvme_fabrics
00:06:51.098 rmmod nvme_keyring
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1555345 ']'
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1555345
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1555345 ']'
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1555345
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555345
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555345'
killing process with pid 1555345
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1555345
00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1555345
00:06:51.098 nvmf threads initialize successfully
00:06:51.098 bdev subsystem init successfully
00:06:51.098 created a nvmf target service
00:06:51.098 create targets's poll groups done
00:06:51.098 all subsystems of target started
00:06:51.098 nvmf target is running
00:06:51.098 all subsystems of target stopped
00:06:51.098 destroy targets's poll groups done
00:06:51.098 destroyed the nvmf target service
00:06:51.098 bdev subsystem finish successfully
00:06:51.098 nvmf threads destroy successfully
00:06:51.098 12:41:20
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.098 12:41:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.667 12:41:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:51.667 12:41:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:51.667 12:41:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.667 12:41:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.667 00:06:51.667 real 0m19.559s 00:06:51.667 user 0m45.904s 00:06:51.667 sys 0m5.860s 00:06:51.667 12:41:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.667 12:41:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.667 ************************************ 00:06:51.667 END TEST nvmf_example 00:06:51.667 ************************************ 00:06:51.667 12:41:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:51.667 12:41:22 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:51.667 12:41:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.667 12:41:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.667 12:41:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.667 ************************************ 00:06:51.667 START TEST nvmf_filesystem 00:06:51.667 ************************************ 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:51.667 * Looking for test storage... 
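
Before the filesystem suite's output begins, two quick consistency checks on the nvmf_example latency table above, under the run's own parameters (4096 B I/O, queue depth 64):

    # throughput:   18069.80 IOPS x 4096 B = 70.59 MiB/s, matching the MiB/s column
    # Little's law: 64 outstanding / 3543.74 us average latency = ~18,060 IOPS, matching the IOPS column
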
00:06:51.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:51.667 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:51.668 12:41:22 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:51.668 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:51.668 #define SPDK_CONFIG_H 00:06:51.668 #define SPDK_CONFIG_APPS 1 00:06:51.668 #define SPDK_CONFIG_ARCH native 00:06:51.668 #undef SPDK_CONFIG_ASAN 00:06:51.668 #undef SPDK_CONFIG_AVAHI 00:06:51.668 #undef SPDK_CONFIG_CET 00:06:51.668 #define SPDK_CONFIG_COVERAGE 1 00:06:51.668 #define SPDK_CONFIG_CROSS_PREFIX 00:06:51.668 #undef SPDK_CONFIG_CRYPTO 00:06:51.668 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:51.668 #undef SPDK_CONFIG_CUSTOMOCF 00:06:51.668 #undef SPDK_CONFIG_DAOS 00:06:51.668 #define SPDK_CONFIG_DAOS_DIR 00:06:51.668 #define SPDK_CONFIG_DEBUG 1 00:06:51.668 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:51.668 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:51.668 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:51.668 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:51.668 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:51.668 #undef SPDK_CONFIG_DPDK_UADK 00:06:51.668 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:51.668 #define SPDK_CONFIG_EXAMPLES 1 00:06:51.668 #undef SPDK_CONFIG_FC 00:06:51.668 #define SPDK_CONFIG_FC_PATH 00:06:51.668 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:51.668 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:51.668 #undef SPDK_CONFIG_FUSE 00:06:51.668 #undef SPDK_CONFIG_FUZZER 00:06:51.668 #define SPDK_CONFIG_FUZZER_LIB 00:06:51.668 #undef SPDK_CONFIG_GOLANG 00:06:51.668 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:51.668 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:51.668 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:51.668 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:51.668 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:51.668 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:51.668 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:51.668 #define SPDK_CONFIG_IDXD 1 00:06:51.668 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:51.668 #undef SPDK_CONFIG_IPSEC_MB 00:06:51.668 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:51.668 #define SPDK_CONFIG_ISAL 1 00:06:51.668 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:51.668 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:51.668 #define SPDK_CONFIG_LIBDIR 00:06:51.668 #undef SPDK_CONFIG_LTO 00:06:51.668 #define SPDK_CONFIG_MAX_LCORES 128 00:06:51.668 #define SPDK_CONFIG_NVME_CUSE 1 00:06:51.668 #undef SPDK_CONFIG_OCF 00:06:51.668 #define SPDK_CONFIG_OCF_PATH 00:06:51.668 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:51.668 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:51.668 #define SPDK_CONFIG_PGO_DIR 00:06:51.668 #undef SPDK_CONFIG_PGO_USE 00:06:51.668 #define SPDK_CONFIG_PREFIX /usr/local 00:06:51.668 #undef SPDK_CONFIG_RAID5F 00:06:51.668 #undef SPDK_CONFIG_RBD 00:06:51.668 #define SPDK_CONFIG_RDMA 1 00:06:51.668 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:51.668 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:51.668 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:51.668 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:51.668 #define SPDK_CONFIG_SHARED 1 00:06:51.668 #undef SPDK_CONFIG_SMA 00:06:51.668 #define SPDK_CONFIG_TESTS 1 00:06:51.668 #undef SPDK_CONFIG_TSAN 00:06:51.668 #define SPDK_CONFIG_UBLK 1 00:06:51.668 #define SPDK_CONFIG_UBSAN 1 00:06:51.668 #undef SPDK_CONFIG_UNIT_TESTS 00:06:51.668 #undef SPDK_CONFIG_URING 00:06:51.668 #define SPDK_CONFIG_URING_PATH 00:06:51.668 #undef SPDK_CONFIG_URING_ZNS 00:06:51.668 #undef SPDK_CONFIG_USDT 00:06:51.668 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:51.668 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:51.668 #define SPDK_CONFIG_VFIO_USER 1 00:06:51.668 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:51.669 #define SPDK_CONFIG_VHOST 1 00:06:51.669 #define SPDK_CONFIG_VIRTIO 1 00:06:51.669 #undef SPDK_CONFIG_VTUNE 00:06:51.669 #define SPDK_CONFIG_VTUNE_DIR 00:06:51.669 #define SPDK_CONFIG_WERROR 1 00:06:51.669 #define SPDK_CONFIG_WPDK_DIR 00:06:51.669 #undef SPDK_CONFIG_XNVME 00:06:51.669 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:51.669 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:51.670 12:41:22 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:51.670 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
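The span above shows autotest_common.sh arming the sanitizer runtime before any test binary starts: ASAN and UBSAN are set to abort on the first finding, and a leak-suppression file is regenerated on the fly so the known libfuse3 leak does not fail LeakSanitizer. A minimal standalone sketch of that pattern (option strings copied from this run; the shape of the snippet is an illustration, not the script itself):

# Sketch: fail fast on sanitizer findings, suppress the known libfuse3 leak.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo 'leak:libfuse3.so' > "$supp"
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$supp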
00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1557755 ]] 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1557755 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.iWuLgD 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.iWuLgD/tests/target /tmp/spdk.iWuLgD 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189518192640 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6456107008 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986527232 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:51.930 12:41:22 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=622592 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:51.930 * Looking for test storage... 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:51.930 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189518192640 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8670699520 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:51.931 12:41:22 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
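set_test_storage, traced above, is the per-test disk-space gate: it runs df -T over every mount, loads size/avail/use into associative arrays keyed by mount point, then walks the candidate directories (the test dir, then a mktemp fallback under /tmp) until one sits on a filesystem with at least the requested ~2 GiB free. A condensed sketch of the probe, assuming GNU df; the helper name is illustrative:

# Sketch: succeed only if $2 sits on a filesystem with >= $1 bytes available.
check_test_storage() {
    local requested_size=$1 target_dir=$2 mount avail
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    avail=$(df --output=avail -B1 "$target_dir" | tail -n1)
    if (( avail >= requested_size )); then
        printf '* Found test storage at %s (mount %s)\n' "$target_dir" "$mount"
    else
        printf '%s: only %s of %s bytes free\n' "$target_dir" "$avail" "$requested_size" >&2
        return 1
    fi
}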
00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.931 12:41:22 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.931 12:41:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:58.501 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:58.501 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.501 12:41:28 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:58.501 Found net devices under 0000:86:00.0: cvl_0_0 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:58.501 Found net devices under 0000:86:00.1: cvl_0_1 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:58.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:06:58.501 00:06:58.501 --- 10.0.0.2 ping statistics --- 00:06:58.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.501 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:06:58.501 00:06:58.501 --- 10.0.0.1 ping statistics --- 00:06:58.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.501 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:58.501 12:41:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.502 ************************************ 00:06:58.502 START TEST nvmf_filesystem_no_in_capsule 00:06:58.502 ************************************ 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1560785 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1560785 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1560785 ']' 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.502 12:41:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.502 [2024-07-15 12:41:28.681035] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:58.502 [2024-07-15 12:41:28.681076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.502 [2024-07-15 12:41:28.753967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.502 [2024-07-15 12:41:28.829013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.502 [2024-07-15 12:41:28.829053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.502 [2024-07-15 12:41:28.829060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.502 [2024-07-15 12:41:28.829066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.502 [2024-07-15 12:41:28.829071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
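For orientation: the trace above builds the single-host NVMe/TCP test bed before the filesystem subtests run. One e810 port (cvl_0_0) is moved into a private network namespace to act as the target, its peer (cvl_0_1) stays in the root namespace as the initiator, both sides get addresses on 10.0.0.0/24, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that setup, taken from the traced commands; the $rootdir variable standing in for the SPDK checkout path is an assumption of this sketch:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check
    # Launch the target inside the namespace, all trace groups on, 4 cores:
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

With both pings returning, the subsequent rpc_cmd calls (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_listener) all execute against that namespaced target.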
00:06:58.502 [2024-07-15 12:41:28.829144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.502 [2024-07-15 12:41:28.829303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.502 [2024-07-15 12:41:28.829337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.502 [2024-07-15 12:41:28.829337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 [2024-07-15 12:41:29.529294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 Malloc1 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.761 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.761 [2024-07-15 12:41:29.677015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:58.762 { 00:06:58.762 "name": "Malloc1", 00:06:58.762 "aliases": [ 00:06:58.762 "b4ffb9a0-841e-42f7-ab95-a005c8904f00" 00:06:58.762 ], 00:06:58.762 "product_name": "Malloc disk", 00:06:58.762 "block_size": 512, 00:06:58.762 "num_blocks": 1048576, 00:06:58.762 "uuid": "b4ffb9a0-841e-42f7-ab95-a005c8904f00", 00:06:58.762 "assigned_rate_limits": { 00:06:58.762 "rw_ios_per_sec": 0, 00:06:58.762 "rw_mbytes_per_sec": 0, 00:06:58.762 "r_mbytes_per_sec": 0, 00:06:58.762 "w_mbytes_per_sec": 0 00:06:58.762 }, 00:06:58.762 "claimed": true, 00:06:58.762 "claim_type": "exclusive_write", 00:06:58.762 "zoned": false, 00:06:58.762 "supported_io_types": { 00:06:58.762 "read": true, 00:06:58.762 "write": true, 00:06:58.762 "unmap": true, 00:06:58.762 "flush": true, 00:06:58.762 "reset": true, 00:06:58.762 "nvme_admin": false, 00:06:58.762 "nvme_io": false, 00:06:58.762 "nvme_io_md": false, 00:06:58.762 "write_zeroes": true, 00:06:58.762 "zcopy": true, 00:06:58.762 "get_zone_info": false, 00:06:58.762 "zone_management": false, 00:06:58.762 "zone_append": false, 00:06:58.762 "compare": false, 00:06:58.762 "compare_and_write": false, 00:06:58.762 "abort": true, 00:06:58.762 "seek_hole": false, 00:06:58.762 "seek_data": false, 00:06:58.762 "copy": true, 00:06:58.762 "nvme_iov_md": false 00:06:58.762 }, 00:06:58.762 "memory_domains": [ 00:06:58.762 { 
00:06:58.762 "dma_device_id": "system", 00:06:58.762 "dma_device_type": 1 00:06:58.762 }, 00:06:58.762 { 00:06:58.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.762 "dma_device_type": 2 00:06:58.762 } 00:06:58.762 ], 00:06:58.762 "driver_specific": {} 00:06:58.762 } 00:06:58.762 ]' 00:06:58.762 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:59.020 12:41:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.956 12:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.956 12:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:59.956 12:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.956 12:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:59.956 12:41:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:02.489 12:41:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:02.489 12:41:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:02.747 12:41:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.685 ************************************ 00:07:03.685 START TEST filesystem_ext4 00:07:03.685 ************************************ 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:03.685 12:41:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:03.685 12:41:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:03.685 mke2fs 1.46.5 (30-Dec-2021) 00:07:03.685 Discarding device blocks: 0/522240 done 00:07:03.944 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:03.944 Filesystem UUID: a6ac3b3a-cee1-4008-b602-7a81de8a3aa9 00:07:03.944 Superblock backups stored on blocks: 00:07:03.944 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:03.944 00:07:03.944 Allocating group tables: 0/64 done 00:07:03.944 Writing inode tables: 0/64 done 00:07:06.481 Creating journal (8192 blocks): done 00:07:06.741 Writing superblocks and filesystem accounting information: 0/64 done 00:07:06.741 00:07:06.741 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:06.741 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.000 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.259 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:07.259 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.259 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:07.259 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:07.259 12:41:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1560785 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.259 00:07:07.259 real 0m3.516s 00:07:07.259 user 0m0.025s 00:07:07.259 sys 0m0.066s 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.259 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:07.259 ************************************ 00:07:07.259 END TEST filesystem_ext4 00:07:07.259 ************************************ 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.260 12:41:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.260 ************************************ 00:07:07.260 START TEST filesystem_btrfs 00:07:07.260 ************************************ 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.260 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.519 btrfs-progs v6.6.2 00:07:07.519 See https://btrfs.readthedocs.io for more information. 00:07:07.519 00:07:07.519 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:07.519 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.519 this does not affect your deployments: 00:07:07.519 - DUP for metadata (-m dup) 00:07:07.519 - enabled no-holes (-O no-holes) 00:07:07.519 - enabled free-space-tree (-R free-space-tree) 00:07:07.519 00:07:07.519 Label: (null) 00:07:07.519 UUID: 542b823f-c5d2-4559-8de7-57c4403e18c1 00:07:07.519 Node size: 16384 00:07:07.519 Sector size: 4096 00:07:07.519 Filesystem size: 510.00MiB 00:07:07.519 Block group profiles: 00:07:07.519 Data: single 8.00MiB 00:07:07.519 Metadata: DUP 32.00MiB 00:07:07.519 System: DUP 8.00MiB 00:07:07.519 SSD detected: yes 00:07:07.519 Zoned device: no 00:07:07.519 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.519 Runtime features: free-space-tree 00:07:07.519 Checksum: crc32c 00:07:07.519 Number of devices: 1 00:07:07.519 Devices: 00:07:07.519 ID SIZE PATH 00:07:07.519 1 510.00MiB /dev/nvme0n1p1 00:07:07.519 00:07:07.519 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:07.519 12:41:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1560785 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.457 00:07:08.457 real 0m1.053s 00:07:08.457 user 0m0.028s 00:07:08.457 sys 0m0.121s 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:08.457 ************************************ 00:07:08.457 END TEST filesystem_btrfs 00:07:08.457 ************************************ 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.457 ************************************ 00:07:08.457 START TEST filesystem_xfs 00:07:08.457 ************************************ 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:08.457 12:41:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:08.457 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:08.457 = sectsz=512 attr=2, projid32bit=1 00:07:08.457 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:08.457 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:08.457 data = bsize=4096 blocks=130560, imaxpct=25 00:07:08.457 = sunit=0 swidth=0 blks 00:07:08.457 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:08.457 log =internal log bsize=4096 blocks=16384, version=2 00:07:08.457 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:08.457 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:09.409 Discarding blocks...Done. 
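The make_filesystem calls traced in these subtests differ only in which force flag they pass: ext4's mkfs takes -F while btrfs and xfs take lowercase -f, and the helper carries a retry counter (local i=0) that these passing runs never exercise. A minimal sketch of that dispatch, reconstructed from the traced lines at autotest_common.sh@924-935 and omitting the retry path:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0      # retry counter in the real helper; unused on the happy path
        local force
        # mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs use -f
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        "mkfs.$fstype" $force "$dev_name"
    }

Invoked here as make_filesystem xfs /dev/nvme0n1p1, which is what produced the mkfs.xfs meta-data banner ending just above.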
00:07:09.409 12:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:09.409 12:41:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1560785 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.973 00:07:11.973 real 0m3.433s 00:07:11.973 user 0m0.027s 00:07:11.973 sys 0m0.068s 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:11.973 ************************************ 00:07:11.973 END TEST filesystem_xfs 00:07:11.973 ************************************ 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.973 12:41:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1560785 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1560785 ']' 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1560785 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1560785 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1560785' 00:07:11.973 killing process with pid 1560785 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1560785 00:07:11.973 12:41:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1560785 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:12.549 00:07:12.549 real 0m14.596s 00:07:12.549 user 0m57.449s 00:07:12.549 sys 0m1.199s 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 ************************************ 00:07:12.549 END TEST nvmf_filesystem_no_in_capsule 00:07:12.549 ************************************ 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 ************************************ 00:07:12.549 START TEST nvmf_filesystem_in_capsule 00:07:12.549 ************************************ 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1563535 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1563535 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1563535 ']' 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.549 12:41:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.549 [2024-07-15 12:41:43.355787] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:12.549 [2024-07-15 12:41:43.355832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.549 [2024-07-15 12:41:43.425532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.811 [2024-07-15 12:41:43.505896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.811 [2024-07-15 12:41:43.505932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:12.811 [2024-07-15 12:41:43.505940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.811 [2024-07-15 12:41:43.505946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.811 [2024-07-15 12:41:43.505951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.811 [2024-07-15 12:41:43.506012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.811 [2024-07-15 12:41:43.506119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.811 [2024-07-15 12:41:43.506242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.811 [2024-07-15 12:41:43.506229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.379 [2024-07-15 12:41:44.213028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.379 Malloc1 00:07:13.379 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.638 12:41:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.638 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.638 [2024-07-15 12:41:44.357190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:13.639 { 00:07:13.639 "name": "Malloc1", 00:07:13.639 "aliases": [ 00:07:13.639 "cc78fc4b-558c-453b-bc9c-6b1c0c6ebbb6" 00:07:13.639 ], 00:07:13.639 "product_name": "Malloc disk", 00:07:13.639 "block_size": 512, 00:07:13.639 "num_blocks": 1048576, 00:07:13.639 "uuid": "cc78fc4b-558c-453b-bc9c-6b1c0c6ebbb6", 00:07:13.639 "assigned_rate_limits": { 00:07:13.639 "rw_ios_per_sec": 0, 00:07:13.639 "rw_mbytes_per_sec": 0, 00:07:13.639 "r_mbytes_per_sec": 0, 00:07:13.639 "w_mbytes_per_sec": 0 00:07:13.639 }, 00:07:13.639 "claimed": true, 00:07:13.639 "claim_type": "exclusive_write", 00:07:13.639 "zoned": false, 00:07:13.639 "supported_io_types": { 00:07:13.639 "read": true, 00:07:13.639 "write": true, 00:07:13.639 "unmap": true, 00:07:13.639 "flush": true, 00:07:13.639 "reset": true, 00:07:13.639 "nvme_admin": false, 00:07:13.639 "nvme_io": false, 00:07:13.639 "nvme_io_md": false, 00:07:13.639 "write_zeroes": true, 00:07:13.639 "zcopy": true, 00:07:13.639 "get_zone_info": false, 00:07:13.639 "zone_management": false, 00:07:13.639 
"zone_append": false, 00:07:13.639 "compare": false, 00:07:13.639 "compare_and_write": false, 00:07:13.639 "abort": true, 00:07:13.639 "seek_hole": false, 00:07:13.639 "seek_data": false, 00:07:13.639 "copy": true, 00:07:13.639 "nvme_iov_md": false 00:07:13.639 }, 00:07:13.639 "memory_domains": [ 00:07:13.639 { 00:07:13.639 "dma_device_id": "system", 00:07:13.639 "dma_device_type": 1 00:07:13.639 }, 00:07:13.639 { 00:07:13.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.639 "dma_device_type": 2 00:07:13.639 } 00:07:13.639 ], 00:07:13.639 "driver_specific": {} 00:07:13.639 } 00:07:13.639 ]' 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:13.639 12:41:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.014 12:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.014 12:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:15.014 12:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.014 12:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:15.014 12:41:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:16.915 12:41:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.292 ************************************ 00:07:18.292 START TEST filesystem_in_capsule_ext4 00:07:18.292 ************************************ 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:18.292 12:41:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:18.292 12:41:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:18.292 mke2fs 1.46.5 (30-Dec-2021) 00:07:18.292 Discarding device blocks: 0/522240 done 00:07:18.292 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:18.292 Filesystem UUID: 69a87962-2a92-4c1f-828f-56d895c9adae 00:07:18.292 Superblock backups stored on blocks: 00:07:18.292 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:18.292 00:07:18.292 Allocating group tables: 0/64 done 00:07:18.292 Writing inode tables: 0/64 done 00:07:19.668 Creating journal (8192 blocks): done 00:07:20.496 Writing superblocks and filesystem accounting information: 0/64 done 00:07:20.496 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1563535 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.496 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.756 00:07:20.756 real 0m2.561s 00:07:20.756 user 0m0.030s 00:07:20.756 sys 0m0.064s 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:20.756 ************************************ 00:07:20.756 END TEST filesystem_in_capsule_ext4 00:07:20.756 ************************************
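Across the three filesystem variants the only thing make_filesystem changes is the force flag, as the '[' ext4 = ext4 ']' test above (and its btrfs/xfs counterparts below) shows. A condensed sketch of the helper, with the trace's retry counter and error handling elided:

    # Condensed from the autotest_common.sh records @924-@935 in this trace.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mkfs.ext4 spells "force" as -F
        else
            force=-f    # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }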
00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.756 ************************************ 00:07:20.756 START TEST filesystem_in_capsule_btrfs 00:07:20.756 ************************************ 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:20.756 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:21.015 btrfs-progs v6.6.2 00:07:21.015 See https://btrfs.readthedocs.io for more information. 00:07:21.015 00:07:21.015 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:21.015 NOTE: several default settings have changed in version 5.15, please make sure 00:07:21.015 this does not affect your deployments: 00:07:21.015 - DUP for metadata (-m dup) 00:07:21.015 - enabled no-holes (-O no-holes) 00:07:21.015 - enabled free-space-tree (-R free-space-tree) 00:07:21.015 00:07:21.015 Label: (null) 00:07:21.015 UUID: 042e40f8-10f7-4523-a01a-27d0aa6b875e 00:07:21.015 Node size: 16384 00:07:21.015 Sector size: 4096 00:07:21.015 Filesystem size: 510.00MiB 00:07:21.015 Block group profiles: 00:07:21.015 Data: single 8.00MiB 00:07:21.015 Metadata: DUP 32.00MiB 00:07:21.015 System: DUP 8.00MiB 00:07:21.015 SSD detected: yes 00:07:21.015 Zoned device: no 00:07:21.015 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:21.015 Runtime features: free-space-tree 00:07:21.015 Checksum: crc32c 00:07:21.015 Number of devices: 1 00:07:21.015 Devices: 00:07:21.015 ID SIZE PATH 00:07:21.015 1 510.00MiB /dev/nvme0n1p1 00:07:21.015 00:07:21.015 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:21.015 12:41:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1563535 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.274 00:07:21.274 real 0m0.680s 00:07:21.274 user 0m0.032s 00:07:21.274 sys 0m0.117s 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.274 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:21.274 ************************************ 00:07:21.274 END TEST filesystem_in_capsule_btrfs 00:07:21.274 ************************************ 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.533 ************************************ 00:07:21.533 START TEST filesystem_in_capsule_xfs 00:07:21.533 ************************************ 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:21.533 12:41:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:21.533 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:21.533 = sectsz=512 attr=2, projid32bit=1 00:07:21.533 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:21.533 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:21.533 data = bsize=4096 blocks=130560, imaxpct=25 00:07:21.533 = sunit=0 swidth=0 blks 00:07:21.533 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:21.533 log =internal log bsize=4096 blocks=16384, version=2 00:07:21.533 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:21.533 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:22.467 Discarding blocks...Done. 
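Every successful mkfs is followed by the same smoke test, visible as filesystem.sh steps 23-30 in the ext4 and btrfs runs above and again in the xfs run below: mount the fresh filesystem, create and delete a file with syncs in between, unmount, and confirm the target process survived:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"    # pid 1563535 in this run; fails if the target died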
00:07:22.467 12:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:22.467 12:41:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1563535 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.001 00:07:25.001 real 0m3.377s 00:07:25.001 user 0m0.017s 00:07:25.001 sys 0m0.080s 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:25.001 ************************************ 00:07:25.001 END TEST filesystem_in_capsule_xfs 00:07:25.001 ************************************ 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:25.001 12:41:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:25.260 12:41:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1563535 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1563535 ']' 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1563535 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1563535 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1563535' 00:07:25.260 killing process with pid 1563535 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1563535 00:07:25.260 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1563535 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:25.829 00:07:25.829 real 0m13.226s 00:07:25.829 user 0m51.902s 00:07:25.829 sys 0m1.293s 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.829 ************************************ 00:07:25.829 END TEST nvmf_filesystem_in_capsule 00:07:25.829 ************************************ 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.829 rmmod nvme_tcp 00:07:25.829 rmmod nvme_fabrics 00:07:25.829 rmmod nvme_keyring 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.829 12:41:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.736 12:41:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:27.736 00:07:27.736 real 0m36.254s 00:07:27.736 user 1m51.186s 00:07:27.736 sys 0m7.092s 00:07:27.736 12:41:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.736 12:41:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.736 ************************************ 00:07:27.736 END TEST nvmf_filesystem 00:07:27.736 ************************************ 00:07:27.996 12:41:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.996 12:41:58 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:27.996 12:41:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.996 12:41:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.996 12:41:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 ************************************ 00:07:27.996 START TEST nvmf_target_discovery 00:07:27.996 ************************************ 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:27.996 * Looking for test storage... 
00:07:27.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.996 12:41:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:27.997 12:41:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.602 12:42:04 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.602 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:34.602 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:07:34.602 00:07:34.602 --- 10.0.0.2 ping statistics --- 00:07:34.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.602 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:34.602 00:07:34.602 --- 10.0.0.1 ping statistics --- 00:07:34.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.602 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1569474 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1569474 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1569474 ']' 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.602 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:34.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.603 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.603 12:42:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.603 [2024-07-15 12:42:04.682193] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:34.603 [2024-07-15 12:42:04.682243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.603 [2024-07-15 12:42:04.754468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.603 [2024-07-15 12:42:04.834767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.603 [2024-07-15 12:42:04.834801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.603 [2024-07-15 12:42:04.834808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.603 [2024-07-15 12:42:04.834815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.603 [2024-07-15 12:42:04.834820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.603 [2024-07-15 12:42:04.834890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.603 [2024-07-15 12:42:04.834994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.603 [2024-07-15 12:42:04.835097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.603 [2024-07-15 12:42:04.835098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.603 [2024-07-15 12:42:05.547214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.603 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
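The rpc_cmd calls that follow repeat once per test subsystem: create a null bdev with the 102400/512 size and block-size arguments shown above, wrap it in a subsystem with a matching serial, attach the bdev as a namespace, and add a TCP listener; a discovery listener and a port-4430 referral are added once at the end. A compact equivalent of that loop (rpc.py stands in for the trace's rpc_cmd wrapper):

    for i in $(seq 1 4); do
        rpc.py bdev_null_create "Null$i" 102400 512
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # One current-discovery listener plus one referral accounts for the six
    # records in the discovery log dumped below.
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430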
00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 Null1 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 [2024-07-15 12:42:05.592711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 Null2 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:34.862 12:42:05 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 Null3 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 Null4 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.862 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:35.121 00:07:35.121 Discovery Log Number of Records 6, Generation counter 6 00:07:35.121 =====Discovery Log Entry 0====== 00:07:35.121 trtype: tcp 00:07:35.121 adrfam: ipv4 00:07:35.121 subtype: current discovery subsystem 00:07:35.121 treq: not required 00:07:35.121 portid: 0 00:07:35.121 trsvcid: 4420 00:07:35.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:35.121 traddr: 10.0.0.2 00:07:35.121 eflags: explicit discovery connections, duplicate discovery information 00:07:35.121 sectype: none 00:07:35.121 =====Discovery Log Entry 1====== 00:07:35.121 trtype: tcp 00:07:35.121 adrfam: ipv4 00:07:35.121 subtype: nvme subsystem 00:07:35.121 treq: not required 00:07:35.121 portid: 0 00:07:35.121 trsvcid: 4420 00:07:35.121 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:35.121 traddr: 10.0.0.2 00:07:35.121 eflags: none 00:07:35.121 sectype: none 00:07:35.121 =====Discovery Log Entry 2====== 00:07:35.121 trtype: tcp 00:07:35.121 adrfam: ipv4 00:07:35.121 subtype: nvme subsystem 00:07:35.121 treq: not required 00:07:35.121 portid: 0 00:07:35.121 trsvcid: 4420 00:07:35.121 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:35.121 traddr: 10.0.0.2 00:07:35.121 eflags: none 00:07:35.121 sectype: none 00:07:35.121 =====Discovery Log Entry 3====== 00:07:35.121 trtype: tcp 00:07:35.121 adrfam: ipv4 00:07:35.121 subtype: nvme subsystem 00:07:35.121 treq: not required 00:07:35.121 portid: 0 00:07:35.121 trsvcid: 4420 00:07:35.121 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:35.121 traddr: 10.0.0.2 00:07:35.121 eflags: none 00:07:35.121 sectype: none 00:07:35.121 =====Discovery Log Entry 4====== 00:07:35.121 trtype: tcp 00:07:35.121 adrfam: ipv4 00:07:35.121 subtype: nvme subsystem 00:07:35.121 treq: not required 
00:07:35.121 portid: 0 00:07:35.121 trsvcid: 4420 00:07:35.121 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:35.121 traddr: 10.0.0.2 00:07:35.121 eflags: none 00:07:35.121 sectype: none 00:07:35.121 =====Discovery Log Entry 5====== 00:07:35.121 trtype: tcp 00:07:35.121 adrfam: ipv4 00:07:35.121 subtype: discovery subsystem referral 00:07:35.121 treq: not required 00:07:35.121 portid: 0 00:07:35.121 trsvcid: 4430 00:07:35.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:35.121 traddr: 10.0.0.2 00:07:35.121 eflags: none 00:07:35.121 sectype: none 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:35.122 Perform nvmf subsystem discovery via RPC 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 [ 00:07:35.122 { 00:07:35.122 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:35.122 "subtype": "Discovery", 00:07:35.122 "listen_addresses": [ 00:07:35.122 { 00:07:35.122 "trtype": "TCP", 00:07:35.122 "adrfam": "IPv4", 00:07:35.122 "traddr": "10.0.0.2", 00:07:35.122 "trsvcid": "4420" 00:07:35.122 } 00:07:35.122 ], 00:07:35.122 "allow_any_host": true, 00:07:35.122 "hosts": [] 00:07:35.122 }, 00:07:35.122 { 00:07:35.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.122 "subtype": "NVMe", 00:07:35.122 "listen_addresses": [ 00:07:35.122 { 00:07:35.122 "trtype": "TCP", 00:07:35.122 "adrfam": "IPv4", 00:07:35.122 "traddr": "10.0.0.2", 00:07:35.122 "trsvcid": "4420" 00:07:35.122 } 00:07:35.122 ], 00:07:35.122 "allow_any_host": true, 00:07:35.122 "hosts": [], 00:07:35.122 "serial_number": "SPDK00000000000001", 00:07:35.122 "model_number": "SPDK bdev Controller", 00:07:35.122 "max_namespaces": 32, 00:07:35.122 "min_cntlid": 1, 00:07:35.122 "max_cntlid": 65519, 00:07:35.122 "namespaces": [ 00:07:35.122 { 00:07:35.122 "nsid": 1, 00:07:35.122 "bdev_name": "Null1", 00:07:35.122 "name": "Null1", 00:07:35.122 "nguid": "AB338A9AE93649AB96C74F14BAECF3F9", 00:07:35.122 "uuid": "ab338a9a-e936-49ab-96c7-4f14baecf3f9" 00:07:35.122 } 00:07:35.122 ] 00:07:35.122 }, 00:07:35.122 { 00:07:35.122 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:35.122 "subtype": "NVMe", 00:07:35.122 "listen_addresses": [ 00:07:35.122 { 00:07:35.122 "trtype": "TCP", 00:07:35.122 "adrfam": "IPv4", 00:07:35.122 "traddr": "10.0.0.2", 00:07:35.122 "trsvcid": "4420" 00:07:35.122 } 00:07:35.122 ], 00:07:35.122 "allow_any_host": true, 00:07:35.122 "hosts": [], 00:07:35.122 "serial_number": "SPDK00000000000002", 00:07:35.122 "model_number": "SPDK bdev Controller", 00:07:35.122 "max_namespaces": 32, 00:07:35.122 "min_cntlid": 1, 00:07:35.122 "max_cntlid": 65519, 00:07:35.122 "namespaces": [ 00:07:35.122 { 00:07:35.122 "nsid": 1, 00:07:35.122 "bdev_name": "Null2", 00:07:35.122 "name": "Null2", 00:07:35.122 "nguid": "72A81DDCB86649859CAE83B5221100B2", 00:07:35.122 "uuid": "72a81ddc-b866-4985-9cae-83b5221100b2" 00:07:35.122 } 00:07:35.122 ] 00:07:35.122 }, 00:07:35.122 { 00:07:35.122 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:35.122 "subtype": "NVMe", 00:07:35.122 "listen_addresses": [ 00:07:35.122 { 00:07:35.122 "trtype": "TCP", 00:07:35.122 "adrfam": "IPv4", 00:07:35.122 "traddr": "10.0.0.2", 00:07:35.122 "trsvcid": "4420" 00:07:35.122 } 00:07:35.122 ], 00:07:35.122 "allow_any_host": true, 
00:07:35.122 "hosts": [], 00:07:35.122 "serial_number": "SPDK00000000000003", 00:07:35.122 "model_number": "SPDK bdev Controller", 00:07:35.122 "max_namespaces": 32, 00:07:35.122 "min_cntlid": 1, 00:07:35.122 "max_cntlid": 65519, 00:07:35.122 "namespaces": [ 00:07:35.122 { 00:07:35.122 "nsid": 1, 00:07:35.122 "bdev_name": "Null3", 00:07:35.122 "name": "Null3", 00:07:35.122 "nguid": "12252371999C4C19AE11C36DE6195514", 00:07:35.122 "uuid": "12252371-999c-4c19-ae11-c36de6195514" 00:07:35.122 } 00:07:35.122 ] 00:07:35.122 }, 00:07:35.122 { 00:07:35.122 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:35.122 "subtype": "NVMe", 00:07:35.122 "listen_addresses": [ 00:07:35.122 { 00:07:35.122 "trtype": "TCP", 00:07:35.122 "adrfam": "IPv4", 00:07:35.122 "traddr": "10.0.0.2", 00:07:35.122 "trsvcid": "4420" 00:07:35.122 } 00:07:35.122 ], 00:07:35.122 "allow_any_host": true, 00:07:35.122 "hosts": [], 00:07:35.122 "serial_number": "SPDK00000000000004", 00:07:35.122 "model_number": "SPDK bdev Controller", 00:07:35.122 "max_namespaces": 32, 00:07:35.122 "min_cntlid": 1, 00:07:35.122 "max_cntlid": 65519, 00:07:35.122 "namespaces": [ 00:07:35.122 { 00:07:35.122 "nsid": 1, 00:07:35.122 "bdev_name": "Null4", 00:07:35.122 "name": "Null4", 00:07:35.122 "nguid": "BB2C40CA6AF448E890845A1470CA56A4", 00:07:35.122 "uuid": "bb2c40ca-6af4-48e8-9084-5a1470ca56a4" 00:07:35.122 } 00:07:35.122 ] 00:07:35.122 } 00:07:35.122 ] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:35.122 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:35.381 rmmod nvme_tcp 00:07:35.381 rmmod nvme_fabrics 00:07:35.381 rmmod nvme_keyring 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1569474 ']' 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1569474 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1569474 ']' 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1569474 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1569474 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1569474' 00:07:35.381 killing process with pid 1569474 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1569474 00:07:35.381 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1569474 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.639 12:42:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.541 12:42:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.541 00:07:37.541 real 0m9.654s 00:07:37.541 user 0m7.862s 00:07:37.541 sys 0m4.673s 00:07:37.541 12:42:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.541 12:42:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:37.541 ************************************ 00:07:37.541 END TEST nvmf_target_discovery 00:07:37.541 ************************************ 00:07:37.541 12:42:08 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:37.541 12:42:08 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:37.541 12:42:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.541 12:42:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.541 12:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.541 ************************************ 00:07:37.541 START TEST nvmf_referrals 00:07:37.541 ************************************ 00:07:37.541 12:42:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:37.801 * Looking for test storage... 00:07:37.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.801 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
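The referral IPs and port 4430 declared above feed a small set of discovery RPCs that referrals.sh cross-checks against what nvme-cli sees. A condensed sketch, assuming the same target socket and the discovery listener on 10.0.0.2:8009 that the test adds further down:

# advertise the discovery service, then the three referrals (referrals.sh@41,44-46)
rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# the RPC view and the host's discovery-log view must agree
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# removal is symmetric (referrals.sh@52-54)
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430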
00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.802 12:42:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.375 12:42:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:44.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:44.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.375 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.376 12:42:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:44.376 Found net devices under 0000:86:00.0: cvl_0_0 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:44.376 Found net devices under 0000:86:00.1: cvl_0_1 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.376 12:42:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:07:44.376 00:07:44.376 --- 10.0.0.2 ping statistics --- 00:07:44.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.376 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:44.376 00:07:44.376 --- 10.0.0.1 ping statistics --- 00:07:44.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.376 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1573661 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1573661 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1573661 ']' 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
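# The namespace plumbing traced above, condensed for reference — a sketch
# assuming the two e810 ports of this run (cvl_0_0 for the target side,
# cvl_0_1 for the host side); moving the target port into its own netns
# lets one machine act as both NVMe/TCP host and target:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # host/initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host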
00:07:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.376 12:42:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 [2024-07-15 12:42:14.437351] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:44.376 [2024-07-15 12:42:14.437392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.376 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.376 [2024-07-15 12:42:14.504423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.376 [2024-07-15 12:42:14.584351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.376 [2024-07-15 12:42:14.584386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.376 [2024-07-15 12:42:14.584393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.376 [2024-07-15 12:42:14.584400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.376 [2024-07-15 12:42:14.584405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.376 [2024-07-15 12:42:14.584453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.376 [2024-07-15 12:42:14.584563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.376 [2024-07-15 12:42:14.584669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.376 [2024-07-15 12:42:14.584670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 [2024-07-15 12:42:15.300214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 [2024-07-15 12:42:15.313599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.376 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.636 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.637 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:44.895 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:44.896 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.153 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:45.154 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:45.154 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:45.154 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:45.154 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:45.154 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.154 12:42:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:45.413 12:42:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:45.413 12:42:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.413 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:45.672 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:45.673 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:45.673 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:45.673 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:45.673 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:45.932 
12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.932 rmmod nvme_tcp 00:07:45.932 rmmod nvme_fabrics 00:07:45.932 rmmod nvme_keyring 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1573661 ']' 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1573661 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1573661 ']' 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1573661 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1573661 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1573661' 00:07:45.932 killing process with pid 1573661 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1573661 00:07:45.932 12:42:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1573661 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.191 12:42:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.731 12:42:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.731 00:07:48.731 real 0m10.618s 00:07:48.731 user 0m12.200s 00:07:48.731 sys 0m5.093s 00:07:48.731 12:42:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.731 12:42:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.731 ************************************ 00:07:48.731 END TEST nvmf_referrals 00:07:48.731 ************************************ 00:07:48.731 12:42:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.731 12:42:19 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.731 12:42:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.731 12:42:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.731 12:42:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.731 ************************************ 00:07:48.731 START TEST nvmf_connect_disconnect 00:07:48.731 ************************************ 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.731 * Looking for test storage... 00:07:48.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.731 12:42:19 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.731 12:42:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.008 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:54.009 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:54.009 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.009 12:42:24 
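[Editor's note: device scan] The loop above resolves each supported NIC's PCI address to the kernel net device behind it by globbing sysfs, which is how the "Found net devices under 0000:86:00.0: cvl_0_0" lines are produced. A minimal sketch of that mapping for one BDF taken from this run:

    # Map a PCI BDF to the net interface name(s) the kernel created for it.
    pci=0000:86:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"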
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:54.009 Found net devices under 0000:86:00.0: cvl_0_0 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:54.009 Found net devices under 0000:86:00.1: cvl_0_1 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.009 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.268 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.268 12:42:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.268 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.268 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.268 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:07:54.268 00:07:54.268 --- 10.0.0.2 ping statistics --- 00:07:54.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.268 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:54.268 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:54.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:07:54.268 00:07:54.268 --- 10.0.0.1 ping statistics --- 00:07:54.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.269 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1577632 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1577632 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1577632 ']' 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.269 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:54.269 [2024-07-15 12:42:25.161431] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
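[Editor's note: network topology] nvmf_tcp_init above moves the target-side e810 port into its own network namespace so one host can act as both initiator and target over physical ports. A condensed sketch of the commands visible in this run (the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing are specific to this machine):

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port in
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two ping checks mirror the statistics blocks above and gate the rest of the test on basic reachability.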
00:07:54.269 [2024-07-15 12:42:25.161475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.528 [2024-07-15 12:42:25.235473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.528 [2024-07-15 12:42:25.309509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.528 [2024-07-15 12:42:25.309549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.528 [2024-07-15 12:42:25.309555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.528 [2024-07-15 12:42:25.309561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.528 [2024-07-15 12:42:25.309566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.528 [2024-07-15 12:42:25.309636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.528 [2024-07-15 12:42:25.309772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.528 [2024-07-15 12:42:25.309882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.528 [2024-07-15 12:42:25.309883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.099 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.099 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:55.099 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.099 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.099 12:42:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.099 [2024-07-15 12:42:26.009099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.099 12:42:26 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.099 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:55.388 [2024-07-15 12:42:26.061036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:55.388 12:42:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:58.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.837 rmmod nvme_tcp 00:08:11.837 rmmod nvme_fabrics 00:08:11.837 rmmod nvme_keyring 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1577632 ']' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1577632 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- 
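[Editor's note: subsystem bring-up and cycle] The RPCs above stand up a TCP subsystem backed by a 64 MB malloc bdev, and the five "disconnected 1 controller(s)" notices are nvme-cli's output as connect_disconnect.sh cycles the fabric connection num_iterations=5 times. A sketch of the same sequence, assuming SPDK's rpc.py is on PATH and talks to this target's socket; the connect/disconnect pair is inferred from the script's behavior, since the loop body is not echoed above:

    # Target side: transport, backing bdev, subsystem, namespace, listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512     # 64 MB bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect and tear down the controller repeatedly.
    for i in {1..5}; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done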
common/autotest_common.sh@948 -- # '[' -z 1577632 ']' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1577632 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1577632 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1577632' 00:08:11.837 killing process with pid 1577632 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1577632 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1577632 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.837 12:42:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.752 12:42:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.752 00:08:13.752 real 0m25.465s 00:08:13.752 user 1m10.267s 00:08:13.752 sys 0m5.590s 00:08:13.752 12:42:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.752 12:42:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.752 ************************************ 00:08:13.752 END TEST nvmf_connect_disconnect 00:08:13.752 ************************************ 00:08:13.752 12:42:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:13.752 12:42:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:13.752 12:42:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.752 12:42:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.752 12:42:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.012 ************************************ 00:08:14.012 START TEST nvmf_multitarget 00:08:14.012 ************************************ 00:08:14.012 12:42:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:14.012 * Looking for test storage... 
00:08:14.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.012 12:42:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.012 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.013 12:42:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.583 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:20.584 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:20.584 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:20.584 Found net devices under 0000:86:00.0: cvl_0_0 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:20.584 Found net devices under 0000:86:00.1: cvl_0_1 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:20.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:20.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:08:20.584 00:08:20.584 --- 10.0.0.2 ping statistics --- 00:08:20.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.584 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:08:20.584 00:08:20.584 --- 10.0.0.1 ping statistics --- 00:08:20.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.584 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1584141 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1584141 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1584141 ']' 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.584 12:42:50 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.584 [2024-07-15 12:42:50.672289] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
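[Editor's note: app startup] nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on its RPC socket. A minimal sketch of that launch-and-wait pattern using the binary path from this run; the polling loop is a plausible stand-in for waitforlisten, whose body is not shown above:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # pid of the netns wrapper here; the harness tracks the target itself
    # Poll the default RPC socket until the app is ready to serve requests.
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died during startup'; exit 1; }
        sleep 0.5
    done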
00:08:20.584 [2024-07-15 12:42:50.672330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.584 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.584 [2024-07-15 12:42:50.744704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.584 [2024-07-15 12:42:50.822890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.584 [2024-07-15 12:42:50.822929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.584 [2024-07-15 12:42:50.822936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.584 [2024-07-15 12:42:50.822941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.584 [2024-07-15 12:42:50.822950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.584 [2024-07-15 12:42:50.823030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.584 [2024-07-15 12:42:50.823140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.584 [2024-07-15 12:42:50.823269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.584 [2024-07-15 12:42:50.823271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.584 12:42:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.584 12:42:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:20.584 12:42:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.584 12:42:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.585 12:42:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.585 12:42:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.585 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:20.585 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:20.585 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:20.843 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:20.844 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:20.844 "nvmf_tgt_1" 00:08:20.844 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:21.141 "nvmf_tgt_2" 00:08:21.141 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:21.141 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:21.141 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:21.141 12:42:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:21.141 true 00:08:21.141 12:42:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:21.404 true 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.404 rmmod nvme_tcp 00:08:21.404 rmmod nvme_fabrics 00:08:21.404 rmmod nvme_keyring 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1584141 ']' 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1584141 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1584141 ']' 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1584141 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1584141 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.404 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1584141' 00:08:21.404 killing process with pid 1584141 00:08:21.405 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1584141 00:08:21.405 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1584141 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.664 12:42:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.200 12:42:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.200 00:08:24.200 real 0m9.883s 00:08:24.200 user 0m9.216s 00:08:24.200 sys 0m4.795s 00:08:24.200 12:42:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.200 12:42:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:24.200 ************************************ 00:08:24.200 END TEST nvmf_multitarget 00:08:24.200 ************************************ 00:08:24.200 12:42:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:24.200 12:42:54 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:24.200 12:42:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.200 12:42:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.200 12:42:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.200 ************************************ 00:08:24.200 START TEST nvmf_rpc 00:08:24.200 ************************************ 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:24.200 * Looking for test storage... 
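The nvmf_multitarget pass that just finished above boils down to one RPC flow: start from the single default target, add two named children, count them, delete them, count again. A sketch against a running nvmf_tgt, using the same multitarget_rpc.py client and jq length checks seen in the trace:

RPC=test/nvmf/target/multitarget_rpc.py

[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32 as in the trace above
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + two children
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only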
00:08:24.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.200 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.201 12:42:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
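Among the defaults nvmf/common.sh just set is the host identity that every later connect and access-control check in this test keys on. A hedged sketch of that derivation, matching the NVME_HOSTNQN, NVME_HOSTID, and NVME_HOST values visible above (the ${VAR##*:} expansion is an illustrative assumption, not necessarily how common.sh peels off the uuid):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# every connect later in the trace passes exactly this identity:
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420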
00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.475 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.476 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.476 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.476 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:08:29.736 00:08:29.736 --- 10.0.0.2 ping statistics --- 00:08:29.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.736 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:08:29.736 00:08:29.736 --- 10.0.0.1 ping statistics --- 00:08:29.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.736 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1587922 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1587922 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1587922 ']' 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.736 12:43:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.736 [2024-07-15 12:43:00.636478] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:29.736 [2024-07-15 12:43:00.636520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.995 [2024-07-15 12:43:00.692169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.995 [2024-07-15 12:43:00.771858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.995 [2024-07-15 12:43:00.771894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:29.995 [2024-07-15 12:43:00.771901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.995 [2024-07-15 12:43:00.771911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.995 [2024-07-15 12:43:00.771931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.995 [2024-07-15 12:43:00.775243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.995 [2024-07-15 12:43:00.775283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.995 [2024-07-15 12:43:00.775390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.995 [2024-07-15 12:43:00.775391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.564 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.564 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:30.564 12:43:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.564 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.564 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.564 12:43:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.823 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:30.823 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.823 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.823 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:30.823 "tick_rate": 2300000000, 00:08:30.823 "poll_groups": [ 00:08:30.823 { 00:08:30.823 "name": "nvmf_tgt_poll_group_000", 00:08:30.823 "admin_qpairs": 0, 00:08:30.823 "io_qpairs": 0, 00:08:30.823 "current_admin_qpairs": 0, 00:08:30.823 "current_io_qpairs": 0, 00:08:30.823 "pending_bdev_io": 0, 00:08:30.823 "completed_nvme_io": 0, 00:08:30.823 "transports": [] 00:08:30.823 }, 00:08:30.823 { 00:08:30.823 "name": "nvmf_tgt_poll_group_001", 00:08:30.823 "admin_qpairs": 0, 00:08:30.823 "io_qpairs": 0, 00:08:30.823 "current_admin_qpairs": 0, 00:08:30.823 "current_io_qpairs": 0, 00:08:30.823 "pending_bdev_io": 0, 00:08:30.823 "completed_nvme_io": 0, 00:08:30.823 "transports": [] 00:08:30.823 }, 00:08:30.823 { 00:08:30.823 "name": "nvmf_tgt_poll_group_002", 00:08:30.823 "admin_qpairs": 0, 00:08:30.823 "io_qpairs": 0, 00:08:30.823 "current_admin_qpairs": 0, 00:08:30.823 "current_io_qpairs": 0, 00:08:30.823 "pending_bdev_io": 0, 00:08:30.823 "completed_nvme_io": 0, 00:08:30.823 "transports": [] 00:08:30.823 }, 00:08:30.823 { 00:08:30.823 "name": "nvmf_tgt_poll_group_003", 00:08:30.823 "admin_qpairs": 0, 00:08:30.823 "io_qpairs": 0, 00:08:30.823 "current_admin_qpairs": 0, 00:08:30.823 "current_io_qpairs": 0, 00:08:30.823 "pending_bdev_io": 0, 00:08:30.823 "completed_nvme_io": 0, 00:08:30.823 "transports": [] 00:08:30.823 } 00:08:30.823 ] 00:08:30.824 }' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.824 [2024-07-15 12:43:01.635719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:30.824 "tick_rate": 2300000000, 00:08:30.824 "poll_groups": [ 00:08:30.824 { 00:08:30.824 "name": "nvmf_tgt_poll_group_000", 00:08:30.824 "admin_qpairs": 0, 00:08:30.824 "io_qpairs": 0, 00:08:30.824 "current_admin_qpairs": 0, 00:08:30.824 "current_io_qpairs": 0, 00:08:30.824 "pending_bdev_io": 0, 00:08:30.824 "completed_nvme_io": 0, 00:08:30.824 "transports": [ 00:08:30.824 { 00:08:30.824 "trtype": "TCP" 00:08:30.824 } 00:08:30.824 ] 00:08:30.824 }, 00:08:30.824 { 00:08:30.824 "name": "nvmf_tgt_poll_group_001", 00:08:30.824 "admin_qpairs": 0, 00:08:30.824 "io_qpairs": 0, 00:08:30.824 "current_admin_qpairs": 0, 00:08:30.824 "current_io_qpairs": 0, 00:08:30.824 "pending_bdev_io": 0, 00:08:30.824 "completed_nvme_io": 0, 00:08:30.824 "transports": [ 00:08:30.824 { 00:08:30.824 "trtype": "TCP" 00:08:30.824 } 00:08:30.824 ] 00:08:30.824 }, 00:08:30.824 { 00:08:30.824 "name": "nvmf_tgt_poll_group_002", 00:08:30.824 "admin_qpairs": 0, 00:08:30.824 "io_qpairs": 0, 00:08:30.824 "current_admin_qpairs": 0, 00:08:30.824 "current_io_qpairs": 0, 00:08:30.824 "pending_bdev_io": 0, 00:08:30.824 "completed_nvme_io": 0, 00:08:30.824 "transports": [ 00:08:30.824 { 00:08:30.824 "trtype": "TCP" 00:08:30.824 } 00:08:30.824 ] 00:08:30.824 }, 00:08:30.824 { 00:08:30.824 "name": "nvmf_tgt_poll_group_003", 00:08:30.824 "admin_qpairs": 0, 00:08:30.824 "io_qpairs": 0, 00:08:30.824 "current_admin_qpairs": 0, 00:08:30.824 "current_io_qpairs": 0, 00:08:30.824 "pending_bdev_io": 0, 00:08:30.824 "completed_nvme_io": 0, 00:08:30.824 "transports": [ 00:08:30.824 { 00:08:30.824 "trtype": "TCP" 00:08:30.824 } 00:08:30.824 ] 00:08:30.824 } 00:08:30.824 ] 00:08:30.824 }' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
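The stats checks running here reduce to four one-liners: count the poll groups (one per core in the -m 0xF mask), create the TCP transport, confirm each poll group picked it up, and verify the qpair counters sum to zero. A sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket (the rpc_cmd helper above wraps the same client):

RPC=scripts/rpc.py

$RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l    # expect 4
$RPC nvmf_create_transport -t tcp -o -u 8192              # same flags as the trace
$RPC nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'   # now "TCP"
$RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # expect 0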
00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.824 Malloc1 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.824 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.083 [2024-07-15 12:43:01.803732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:31.083 [2024-07-15 12:43:01.832293] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:31.083 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:31.083 could not add new controller: failed to write to nvme-fabrics device 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.083 12:43:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.020 12:43:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.020 12:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:32.020 12:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.020 12:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:32.020 12:43:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:34.555 12:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:34.555 12:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:34.555 12:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.555 12:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:34.555 12:43:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.555 12:43:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:34.555 12:43:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.555 [2024-07-15 12:43:05.135629] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:34.555 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:34.555 could not add new controller: failed to write to nvme-fabrics device 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.555 12:43:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.493 12:43:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.493 12:43:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.493 12:43:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.493 12:43:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.493 12:43:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:37.397 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:37.657 12:43:08 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 [2024-07-15 12:43:08.435451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.657 12:43:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.035 12:43:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.035 12:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:39.035 12:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.035 12:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:39.035 12:43:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:40.936 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:40.936 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:40.936 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.936 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.937 [2024-07-15 12:43:11.779468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.937 12:43:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.339 12:43:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.339 12:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:42.339 12:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.339 12:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:42.339 12:43:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:44.238 12:43:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.238 [2024-07-15 12:43:15.056125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.238 12:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.614 12:43:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.614 12:43:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.614 12:43:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.614 12:43:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.614 12:43:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.542 [2024-07-15 12:43:18.338390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.542 12:43:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.918 12:43:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.918 12:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.918 12:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.918 12:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.918 12:43:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.823 
12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.823 [2024-07-15 12:43:21.667824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.823 12:43:21 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.823 12:43:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:52.200 12:43:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.200 12:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:52.200 12:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.200 12:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:52.200 12:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 [2024-07-15 12:43:24.999561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 [2024-07-15 12:43:25.047676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:54.102 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 [2024-07-15 12:43:25.099848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
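The connect/wait/disconnect cycles traced above all funnel through the same two helpers in common/autotest_common.sh. A minimal sketch of that polling logic, reconstructed only from the xtrace lines (the sleep intervals and the 15-iteration bound are read off the trace; the exact retry structure inside the real helpers is an assumption):

waitforserial() {
    # Poll lsblk until a block device with the expected SERIAL shows up.
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}

waitforserial_disconnect() {
    # Inverse check: wait until the serial disappears from lsblk.
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1
        sleep 1
    done
    return 0
}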
00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 [2024-07-15 12:43:25.148033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
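The iterations running through this stretch of the trace all drive the same RPC sequence against the target; one pass, condensed from the xtrace (rpc_cmd is the suite's wrapper around scripts/rpc.py, and Malloc1 is the bdev created earlier in the run):

for i in $(seq 1 "$loops"); do
    # build the subsystem up...
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # ...and tear it straight back down
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The run then closes with nvmf_get_stats, whose per-poll-group counters are totalled by the jsum helper (a jq '<filter>' | awk '{s+=$1} END {print s}' pipeline, visible below): 2+2+1+2 = 7 admin qpairs and 4 x 168 = 672 I/O qpairs, which is exactly what the (( 7 > 0 )) and (( 672 > 0 )) assertions check.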
00:08:54.362 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.363 [2024-07-15 12:43:25.196199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:54.363 "tick_rate": 2300000000, 00:08:54.363 "poll_groups": [ 00:08:54.363 { 00:08:54.363 "name": "nvmf_tgt_poll_group_000", 00:08:54.363 "admin_qpairs": 2, 00:08:54.363 "io_qpairs": 168, 00:08:54.363 "current_admin_qpairs": 0, 00:08:54.363 "current_io_qpairs": 0, 00:08:54.363 "pending_bdev_io": 0, 00:08:54.363 "completed_nvme_io": 256, 00:08:54.363 "transports": [ 00:08:54.363 { 00:08:54.363 "trtype": "TCP" 00:08:54.363 } 00:08:54.363 ] 00:08:54.363 }, 00:08:54.363 { 00:08:54.363 "name": "nvmf_tgt_poll_group_001", 00:08:54.363 "admin_qpairs": 2, 00:08:54.363 "io_qpairs": 168, 00:08:54.363 "current_admin_qpairs": 0, 00:08:54.363 "current_io_qpairs": 0, 00:08:54.363 "pending_bdev_io": 0, 00:08:54.363 "completed_nvme_io": 264, 00:08:54.363 "transports": [ 00:08:54.363 { 00:08:54.363 "trtype": "TCP" 00:08:54.363 } 00:08:54.363 ] 00:08:54.363 }, 00:08:54.363 { 
00:08:54.363 "name": "nvmf_tgt_poll_group_002", 00:08:54.363 "admin_qpairs": 1, 00:08:54.363 "io_qpairs": 168, 00:08:54.363 "current_admin_qpairs": 0, 00:08:54.363 "current_io_qpairs": 0, 00:08:54.363 "pending_bdev_io": 0, 00:08:54.363 "completed_nvme_io": 272, 00:08:54.363 "transports": [ 00:08:54.363 { 00:08:54.363 "trtype": "TCP" 00:08:54.363 } 00:08:54.363 ] 00:08:54.363 }, 00:08:54.363 { 00:08:54.363 "name": "nvmf_tgt_poll_group_003", 00:08:54.363 "admin_qpairs": 2, 00:08:54.363 "io_qpairs": 168, 00:08:54.363 "current_admin_qpairs": 0, 00:08:54.363 "current_io_qpairs": 0, 00:08:54.363 "pending_bdev_io": 0, 00:08:54.363 "completed_nvme_io": 230, 00:08:54.363 "transports": [ 00:08:54.363 { 00:08:54.363 "trtype": "TCP" 00:08:54.363 } 00:08:54.363 ] 00:08:54.363 } 00:08:54.363 ] 00:08:54.363 }' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:54.363 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.622 rmmod nvme_tcp 00:08:54.622 rmmod nvme_fabrics 00:08:54.622 rmmod nvme_keyring 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1587922 ']' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1587922 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1587922 ']' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1587922 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1587922 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1587922' 00:08:54.622 killing process with pid 1587922 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1587922 00:08:54.622 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1587922 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.881 12:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.786 12:43:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.786 00:08:56.786 real 0m33.058s 00:08:56.786 user 1m41.035s 00:08:56.786 sys 0m6.091s 00:08:56.786 12:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.786 12:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.786 ************************************ 00:08:56.786 END TEST nvmf_rpc 00:08:56.786 ************************************ 00:08:57.047 12:43:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:57.047 12:43:27 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:57.047 12:43:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.047 12:43:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.047 12:43:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.047 ************************************ 00:08:57.047 START TEST nvmf_invalid 00:08:57.047 ************************************ 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:57.047 * Looking for test storage... 
00:08:57.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.047 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.048 12:43:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.612 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:03.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:03.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:03.613 Found net devices under 0000:86:00.0: cvl_0_0 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:03.613 Found net devices under 0000:86:00.1: cvl_0_1 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:03.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:09:03.613 00:09:03.613 --- 10.0.0.2 ping statistics --- 00:09:03.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.613 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:09:03.613 00:09:03.613 --- 10.0.0.1 ping statistics --- 00:09:03.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.613 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1595537 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1595537 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1595537 ']' 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.613 12:43:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.613 [2024-07-15 12:43:33.752357] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
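Condensing the bring-up the nvmf_invalid prologue just walked through: one port of the e810 pair is moved into a private network namespace and becomes the target side, the other stays in the root namespace as the initiator, and the target app is started inside the namespace. The commands below are taken from the trace; the final wait loop is a stand-in for the waitforlisten helper (its internals are an assumption here, with rpc_get_methods used simply as a cheap RPC to probe with):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# launch the target in the namespace, then block until its RPC socket answers
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done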
00:09:03.613 [2024-07-15 12:43:33.752405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.613 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.613 [2024-07-15 12:43:33.827275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.613 [2024-07-15 12:43:33.907142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.613 [2024-07-15 12:43:33.907177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.613 [2024-07-15 12:43:33.907184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.613 [2024-07-15 12:43:33.907190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.613 [2024-07-15 12:43:33.907196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.613 [2024-07-15 12:43:33.907252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.613 [2024-07-15 12:43:33.907299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.613 [2024-07-15 12:43:33.907417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.613 [2024-07-15 12:43:33.907418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.613 12:43:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.613 12:43:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:03.613 12:43:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.613 12:43:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.613 12:43:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13342 00:09:03.883 [2024-07-15 12:43:34.758644] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:03.883 { 00:09:03.883 "nqn": "nqn.2016-06.io.spdk:cnode13342", 00:09:03.883 "tgt_name": "foobar", 00:09:03.883 "method": "nvmf_create_subsystem", 00:09:03.883 "req_id": 1 00:09:03.883 } 00:09:03.883 Got JSON-RPC error response 00:09:03.883 response: 00:09:03.883 { 00:09:03.883 "code": -32603, 00:09:03.883 "message": "Unable to find target foobar" 00:09:03.883 }' 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:03.883 { 00:09:03.883 "nqn": "nqn.2016-06.io.spdk:cnode13342", 00:09:03.883 "tgt_name": "foobar", 00:09:03.883 "method": "nvmf_create_subsystem", 00:09:03.883 "req_id": 1 00:09:03.883 } 00:09:03.883 Got JSON-RPC error response 00:09:03.883 response: 00:09:03.883 { 00:09:03.883 "code": -32603, 00:09:03.883 "message": "Unable to find target foobar" 
00:09:03.883 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:03.883 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2686 00:09:04.222 [2024-07-15 12:43:34.943317] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2686: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:04.222 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:04.222 { 00:09:04.222 "nqn": "nqn.2016-06.io.spdk:cnode2686", 00:09:04.222 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:04.222 "method": "nvmf_create_subsystem", 00:09:04.222 "req_id": 1 00:09:04.222 } 00:09:04.222 Got JSON-RPC error response 00:09:04.222 response: 00:09:04.222 { 00:09:04.222 "code": -32602, 00:09:04.222 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:04.222 }' 00:09:04.222 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:04.222 { 00:09:04.222 "nqn": "nqn.2016-06.io.spdk:cnode2686", 00:09:04.222 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:04.222 "method": "nvmf_create_subsystem", 00:09:04.222 "req_id": 1 00:09:04.222 } 00:09:04.222 Got JSON-RPC error response 00:09:04.222 response: 00:09:04.222 { 00:09:04.222 "code": -32602, 00:09:04.222 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:04.222 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:04.222 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:04.222 12:43:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14641 00:09:04.222 [2024-07-15 12:43:35.135959] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14641: invalid model number 'SPDK_Controller' 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:04.222 { 00:09:04.222 "nqn": "nqn.2016-06.io.spdk:cnode14641", 00:09:04.222 "model_number": "SPDK_Controller\u001f", 00:09:04.222 "method": "nvmf_create_subsystem", 00:09:04.222 "req_id": 1 00:09:04.222 } 00:09:04.222 Got JSON-RPC error response 00:09:04.222 response: 00:09:04.222 { 00:09:04.222 "code": -32602, 00:09:04.222 "message": "Invalid MN SPDK_Controller\u001f" 00:09:04.222 }' 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:04.222 { 00:09:04.222 "nqn": "nqn.2016-06.io.spdk:cnode14641", 00:09:04.222 "model_number": "SPDK_Controller\u001f", 00:09:04.222 "method": "nvmf_create_subsystem", 00:09:04.222 "req_id": 1 00:09:04.222 } 00:09:04.222 Got JSON-RPC error response 00:09:04.222 response: 00:09:04.222 { 00:09:04.222 "code": -32602, 00:09:04.222 "message": "Invalid MN SPDK_Controller\u001f" 00:09:04.222 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:04.222 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.481 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 
12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 
12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '`i3c6n1>&Wu7v)@9`}2NW' 00:09:04.482 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '`i3c6n1>&Wu7v)@9`}2NW' nqn.2016-06.io.spdk:cnode23594 00:09:04.742 [2024-07-15 12:43:35.465049] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23594: invalid serial number '`i3c6n1>&Wu7v)@9`}2NW' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:04.742 { 00:09:04.742 "nqn": "nqn.2016-06.io.spdk:cnode23594", 00:09:04.742 "serial_number": "`i3c6n1>&Wu7v)@9`}2NW", 00:09:04.742 "method": "nvmf_create_subsystem", 00:09:04.742 "req_id": 1 00:09:04.742 } 00:09:04.742 Got JSON-RPC error response 00:09:04.742 response: 00:09:04.742 { 
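
The long per-character trace above is gen_random_s assembling the 21-byte serial number `i3c6n1>&Wu7v)@9`}2NW: each pass picks an entry from the chars table of codes 32 through 127, converts it with printf %x plus echo -e, and appends it to string, and the trailing [[ ` == \- ]] test guards against a leading dash that rpc.py would parse as an option. A compact bash sketch of the same generator (the name rand_s is hypothetical); the target's rejection of the serial it produced continues in the JSON-RPC response below:

    # Build an n-character string from ASCII codes 32..127, as the chars table does.
    rand_s() {
        local n=$1 s= code i
        for ((i = 0; i < n; i++)); do
            code=$((32 + RANDOM % 96))
            s+=$(printf "\\x$(printf %x "$code")")
        done
        echo "$s"   # (invalid.sh additionally rejects a result starting with '-')
    }
    serial=$(rand_s 21)
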
00:09:04.742 "code": -32602, 00:09:04.742 "message": "Invalid SN `i3c6n1>&Wu7v)@9`}2NW" 00:09:04.742 }' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:04.742 { 00:09:04.742 "nqn": "nqn.2016-06.io.spdk:cnode23594", 00:09:04.742 "serial_number": "`i3c6n1>&Wu7v)@9`}2NW", 00:09:04.742 "method": "nvmf_create_subsystem", 00:09:04.742 "req_id": 1 00:09:04.742 } 00:09:04.742 Got JSON-RPC error response 00:09:04.742 response: 00:09:04.742 { 00:09:04.742 "code": -32602, 00:09:04.742 "message": "Invalid SN `i3c6n1>&Wu7v)@9`}2NW" 00:09:04.742 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:04.742 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 
00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:04.743 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 
00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8' nqn.2016-06.io.spdk:cnode31693 00:09:05.003 [2024-07-15 12:43:35.898482] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31693: invalid model number 'n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:05.003 { 00:09:05.003 "nqn": "nqn.2016-06.io.spdk:cnode31693", 00:09:05.003 "model_number": "n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8", 00:09:05.003 "method": "nvmf_create_subsystem", 00:09:05.003 "req_id": 1 00:09:05.003 } 00:09:05.003 Got JSON-RPC error response 00:09:05.003 response: 00:09:05.003 { 00:09:05.003 "code": -32602, 00:09:05.003 "message": "Invalid MN n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8" 00:09:05.003 }' 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:05.003 { 00:09:05.003 "nqn": "nqn.2016-06.io.spdk:cnode31693", 00:09:05.003 "model_number": "n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8", 00:09:05.003 "method": "nvmf_create_subsystem", 00:09:05.003 "req_id": 1 00:09:05.003 } 00:09:05.003 Got JSON-RPC error response 00:09:05.003 response: 00:09:05.003 { 
00:09:05.003 "code": -32602, 00:09:05.003 "message": "Invalid MN n8Z?kUzVc%0rSo=|h_SI/Is%X)I-;pre)@aV,M/Q8" 00:09:05.003 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:05.003 12:43:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:05.262 [2024-07-15 12:43:36.083208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.262 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:05.521 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:05.521 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:05.521 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:05.521 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:05.521 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:05.521 [2024-07-15 12:43:36.460421] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:05.779 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:05.779 { 00:09:05.779 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:05.779 "listen_address": { 00:09:05.779 "trtype": "tcp", 00:09:05.779 "traddr": "", 00:09:05.779 "trsvcid": "4421" 00:09:05.779 }, 00:09:05.779 "method": "nvmf_subsystem_remove_listener", 00:09:05.779 "req_id": 1 00:09:05.779 } 00:09:05.779 Got JSON-RPC error response 00:09:05.779 response: 00:09:05.779 { 00:09:05.779 "code": -32602, 00:09:05.779 "message": "Invalid parameters" 00:09:05.779 }' 00:09:05.779 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:05.779 { 00:09:05.779 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:05.779 "listen_address": { 00:09:05.779 "trtype": "tcp", 00:09:05.779 "traddr": "", 00:09:05.779 "trsvcid": "4421" 00:09:05.779 }, 00:09:05.779 "method": "nvmf_subsystem_remove_listener", 00:09:05.779 "req_id": 1 00:09:05.779 } 00:09:05.779 Got JSON-RPC error response 00:09:05.779 response: 00:09:05.779 { 00:09:05.779 "code": -32602, 00:09:05.779 "message": "Invalid parameters" 00:09:05.779 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:05.779 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28782 -i 0 00:09:05.779 [2024-07-15 12:43:36.653035] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28782: invalid cntlid range [0-65519] 00:09:05.779 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:05.779 { 00:09:05.779 "nqn": "nqn.2016-06.io.spdk:cnode28782", 00:09:05.779 "min_cntlid": 0, 00:09:05.779 "method": "nvmf_create_subsystem", 00:09:05.779 "req_id": 1 00:09:05.779 } 00:09:05.779 Got JSON-RPC error response 00:09:05.779 response: 00:09:05.779 { 00:09:05.779 "code": -32602, 00:09:05.779 "message": "Invalid cntlid range [0-65519]" 00:09:05.779 }' 00:09:05.779 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:05.779 { 00:09:05.779 "nqn": "nqn.2016-06.io.spdk:cnode28782", 00:09:05.779 "min_cntlid": 0, 00:09:05.779 "method": "nvmf_create_subsystem", 00:09:05.779 "req_id": 1 
00:09:05.779 } 00:09:05.779 Got JSON-RPC error response 00:09:05.779 response: 00:09:05.779 { 00:09:05.779 "code": -32602, 00:09:05.779 "message": "Invalid cntlid range [0-65519]" 00:09:05.779 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:05.779 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27340 -i 65520 00:09:06.040 [2024-07-15 12:43:36.849712] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27340: invalid cntlid range [65520-65519] 00:09:06.040 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:06.040 { 00:09:06.040 "nqn": "nqn.2016-06.io.spdk:cnode27340", 00:09:06.040 "min_cntlid": 65520, 00:09:06.040 "method": "nvmf_create_subsystem", 00:09:06.040 "req_id": 1 00:09:06.040 } 00:09:06.040 Got JSON-RPC error response 00:09:06.040 response: 00:09:06.040 { 00:09:06.040 "code": -32602, 00:09:06.040 "message": "Invalid cntlid range [65520-65519]" 00:09:06.040 }' 00:09:06.040 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:06.040 { 00:09:06.040 "nqn": "nqn.2016-06.io.spdk:cnode27340", 00:09:06.040 "min_cntlid": 65520, 00:09:06.040 "method": "nvmf_create_subsystem", 00:09:06.040 "req_id": 1 00:09:06.040 } 00:09:06.040 Got JSON-RPC error response 00:09:06.040 response: 00:09:06.040 { 00:09:06.040 "code": -32602, 00:09:06.040 "message": "Invalid cntlid range [65520-65519]" 00:09:06.040 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.040 12:43:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32214 -I 0 00:09:06.300 [2024-07-15 12:43:37.034403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32214: invalid cntlid range [1-0] 00:09:06.300 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:06.300 { 00:09:06.300 "nqn": "nqn.2016-06.io.spdk:cnode32214", 00:09:06.300 "max_cntlid": 0, 00:09:06.300 "method": "nvmf_create_subsystem", 00:09:06.300 "req_id": 1 00:09:06.300 } 00:09:06.300 Got JSON-RPC error response 00:09:06.300 response: 00:09:06.300 { 00:09:06.300 "code": -32602, 00:09:06.300 "message": "Invalid cntlid range [1-0]" 00:09:06.300 }' 00:09:06.300 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:06.300 { 00:09:06.300 "nqn": "nqn.2016-06.io.spdk:cnode32214", 00:09:06.300 "max_cntlid": 0, 00:09:06.300 "method": "nvmf_create_subsystem", 00:09:06.300 "req_id": 1 00:09:06.300 } 00:09:06.300 Got JSON-RPC error response 00:09:06.300 response: 00:09:06.300 { 00:09:06.300 "code": -32602, 00:09:06.300 "message": "Invalid cntlid range [1-0]" 00:09:06.300 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.300 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11233 -I 65520 00:09:06.300 [2024-07-15 12:43:37.223012] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11233: invalid cntlid range [1-65520] 00:09:06.300 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:06.300 { 00:09:06.300 "nqn": "nqn.2016-06.io.spdk:cnode11233", 00:09:06.300 "max_cntlid": 65520, 00:09:06.300 "method": "nvmf_create_subsystem", 00:09:06.300 "req_id": 1 00:09:06.300 } 00:09:06.300 
Got JSON-RPC error response 00:09:06.300 response: 00:09:06.300 { 00:09:06.300 "code": -32602, 00:09:06.300 "message": "Invalid cntlid range [1-65520]" 00:09:06.300 }' 00:09:06.300 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:06.300 { 00:09:06.300 "nqn": "nqn.2016-06.io.spdk:cnode11233", 00:09:06.300 "max_cntlid": 65520, 00:09:06.300 "method": "nvmf_create_subsystem", 00:09:06.300 "req_id": 1 00:09:06.300 } 00:09:06.300 Got JSON-RPC error response 00:09:06.300 response: 00:09:06.300 { 00:09:06.300 "code": -32602, 00:09:06.300 "message": "Invalid cntlid range [1-65520]" 00:09:06.300 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.300 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6443 -i 6 -I 5 00:09:06.559 [2024-07-15 12:43:37.407654] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6443: invalid cntlid range [6-5] 00:09:06.559 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:06.559 { 00:09:06.559 "nqn": "nqn.2016-06.io.spdk:cnode6443", 00:09:06.559 "min_cntlid": 6, 00:09:06.559 "max_cntlid": 5, 00:09:06.559 "method": "nvmf_create_subsystem", 00:09:06.559 "req_id": 1 00:09:06.559 } 00:09:06.559 Got JSON-RPC error response 00:09:06.559 response: 00:09:06.559 { 00:09:06.559 "code": -32602, 00:09:06.559 "message": "Invalid cntlid range [6-5]" 00:09:06.559 }' 00:09:06.559 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:06.559 { 00:09:06.559 "nqn": "nqn.2016-06.io.spdk:cnode6443", 00:09:06.559 "min_cntlid": 6, 00:09:06.559 "max_cntlid": 5, 00:09:06.559 "method": "nvmf_create_subsystem", 00:09:06.559 "req_id": 1 00:09:06.559 } 00:09:06.559 Got JSON-RPC error response 00:09:06.559 response: 00:09:06.559 { 00:09:06.559 "code": -32602, 00:09:06.559 "message": "Invalid cntlid range [6-5]" 00:09:06.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.559 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:06.819 { 00:09:06.819 "name": "foobar", 00:09:06.819 "method": "nvmf_delete_target", 00:09:06.819 "req_id": 1 00:09:06.819 } 00:09:06.819 Got JSON-RPC error response 00:09:06.819 response: 00:09:06.819 { 00:09:06.819 "code": -32602, 00:09:06.819 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:06.819 }' 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:06.819 { 00:09:06.819 "name": "foobar", 00:09:06.819 "method": "nvmf_delete_target", 00:09:06.819 "req_id": 1 00:09:06.819 } 00:09:06.819 Got JSON-RPC error response 00:09:06.819 response: 00:09:06.819 { 00:09:06.819 "code": -32602, 00:09:06.819 "message": "The specified target doesn't exist, cannot delete it." 
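
The five cntlid probes above all land in the same range check: a subsystem's controller IDs must satisfy 1 <= min_cntlid <= max_cntlid <= 65519, so 0, 65520, and the inverted [6-5] window each come back as -32602 "Invalid cntlid range". The last probe, nvmf_delete_target --name foobar against the multitarget RPC helper, is rejected with the message whose glob check closes the test directly below. A condensed sketch of the range probes; a single hypothetical NQN stands in for the distinct cnode numbers used in the trace:

    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # $args is intentionally unquoted so flag and value split into two words
        out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]]
    done
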
00:09:06.819 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.819 rmmod nvme_tcp 00:09:06.819 rmmod nvme_fabrics 00:09:06.819 rmmod nvme_keyring 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1595537 ']' 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1595537 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1595537 ']' 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1595537 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1595537 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1595537' 00:09:06.819 killing process with pid 1595537 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1595537 00:09:06.819 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1595537 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.082 12:43:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.987 12:43:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.988 00:09:08.988 real 0m12.086s 00:09:08.988 user 0m19.561s 00:09:08.988 sys 0m5.298s 00:09:08.988 12:43:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.988 12:43:39 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:08.988 ************************************ 00:09:08.988 END TEST nvmf_invalid 00:09:08.988 ************************************ 00:09:08.988 12:43:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.988 12:43:39 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:08.988 12:43:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.988 12:43:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.988 12:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.247 ************************************ 00:09:09.247 START TEST nvmf_abort 00:09:09.247 ************************************ 00:09:09.247 12:43:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:09.247 * Looking for test storage... 00:09:09.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.247 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.248 12:43:40 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.248 12:43:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.818 
12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:15.818 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:15.818 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.818 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:15.819 Found net devices under 0000:86:00.0: cvl_0_0 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:15.819 Found net devices under 0000:86:00.1: cvl_0_1 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
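
# Condensed sketch, not part of the harness output: the device discovery
# traced above matches PCI vendor:device IDs against the e810 list in
# nvmf/common.sh (0x8086:0x159b on this node) and reads the interface name
# out of sysfs. A stand-alone equivalent, assuming lspci is installed (the
# harness uses its own pci_bus_cache arrays instead of lspci):
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    # each matching PCI function publishes its netdev under sysfs
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: ${net##*/}"
    done
done
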
00:09:15.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:09:15.819 00:09:15.819 --- 10.0.0.2 ping statistics --- 00:09:15.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.819 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:09:15.819 00:09:15.819 --- 10.0.0.1 ping statistics --- 00:09:15.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.819 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1599925 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1599925 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1599925 ']' 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.819 12:43:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.819 [2024-07-15 12:43:45.882514] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
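
# Condensed sketch, not part of the harness output: the topology that
# nvmf_tcp_init established above. One E810 port (cvl_0_0) is moved into a
# private namespace and serves as the NVMe/TCP target at 10.0.0.2; its peer
# port (cvl_0_1) stays in the default namespace as the initiator at
# 10.0.0.1. Names and addresses are the ones from this run; the two ports
# are assumed to be wired back-to-back or to the same switch.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
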
00:09:15.819 [2024-07-15 12:43:45.882557] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.819 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.819 [2024-07-15 12:43:45.954105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.819 [2024-07-15 12:43:46.032924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.819 [2024-07-15 12:43:46.032970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.819 [2024-07-15 12:43:46.032978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.819 [2024-07-15 12:43:46.032984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.819 [2024-07-15 12:43:46.032989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.819 [2024-07-15 12:43:46.033107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.819 [2024-07-15 12:43:46.033231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.819 [2024-07-15 12:43:46.033251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.819 [2024-07-15 12:43:46.745974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.819 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 Malloc0 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 Delay0 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 [2024-07-15 12:43:46.817096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.078 12:43:46 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.078 [2024-07-15 12:43:46.979379] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:18.612 Initializing NVMe Controllers 00:09:18.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:18.612 controller IO queue size 128 less than required 00:09:18.612 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:18.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:18.612 Initialization complete. Launching workers. 
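
# Condensed sketch, not part of the harness output: the RPC sequence behind
# the abort run above, reconstructed from the rpc_cmd trace ($rpc is
# shorthand introduced here for the scripts/rpc.py path this job uses). The
# delay bdev adds ~1 s average and p99 latency (values are microseconds) to
# reads and writes, so queued I/O is still in flight when the abort example
# cancels it.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4096 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# one second of 128-deep random reads, aborting commands as they queue
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
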
00:09:18.612 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42845 00:09:18.612 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42906, failed to submit 62 00:09:18.612 success 42849, unsuccess 57, failed 0 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.612 rmmod nvme_tcp 00:09:18.612 rmmod nvme_fabrics 00:09:18.612 rmmod nvme_keyring 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1599925 ']' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1599925 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1599925 ']' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1599925 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1599925 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1599925' 00:09:18.612 killing process with pid 1599925 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1599925 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1599925 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.612 12:43:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.518 12:43:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.518 00:09:20.518 real 0m11.479s 00:09:20.518 user 0m13.381s 00:09:20.518 sys 0m5.254s 00:09:20.518 12:43:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.518 12:43:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.518 ************************************ 00:09:20.518 END TEST nvmf_abort 00:09:20.518 ************************************ 00:09:20.777 12:43:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.777 12:43:51 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:20.777 12:43:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.777 12:43:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.777 12:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.777 ************************************ 00:09:20.777 START TEST nvmf_ns_hotplug_stress 00:09:20.777 ************************************ 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:20.777 * Looking for test storage... 00:09:20.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.777 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.777 12:43:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.778 12:43:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.778 12:43:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:27.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:27.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.342 12:43:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:27.342 Found net devices under 0000:86:00.0: cvl_0_0 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:27.342 Found net devices under 0000:86:00.1: cvl_0_1 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.342 12:43:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:27.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:09:27.342 00:09:27.342 --- 10.0.0.2 ping statistics --- 00:09:27.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.342 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:27.342 00:09:27.342 --- 10.0.0.1 ping statistics --- 00:09:27.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.342 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:27.342 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1603927 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1603927 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1603927 ']' 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.343 12:43:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.343 [2024-07-15 12:43:57.506085] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
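
# Condensed sketch, not part of the harness output: the setup and hot-plug
# loop that ns_hotplug_stress.sh drives in the trace below. A null bdev is
# resized and the Delay0 namespace is detached and re-attached while
# spdk_nvme_perf keeps reading; -Q 1000 tolerates the resulting I/O errors
# and logs only every 1000th one, which is where the "Message suppressed
# 999 times" records come from. RPC verbs and arguments are as traced; the
# loop shape and the $rpc shorthand are condensed here for readability.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do         # stress until perf exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 $null_size        # grow NULL1 under load
done
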
00:09:27.343 [2024-07-15 12:43:57.506127] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.343 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.343 [2024-07-15 12:43:57.556850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.343 [2024-07-15 12:43:57.627960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.343 [2024-07-15 12:43:57.628002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.343 [2024-07-15 12:43:57.628008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.343 [2024-07-15 12:43:57.628014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.343 [2024-07-15 12:43:57.628019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.343 [2024-07-15 12:43:57.628145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.343 [2024-07-15 12:43:57.628273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.343 [2024-07-15 12:43:57.628273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.601 [2024-07-15 12:43:58.505450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.601 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.880 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.158 [2024-07-15 12:43:58.878845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.158 12:43:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.158 12:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:28.416 Malloc0 00:09:28.416 12:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:28.675 Delay0 00:09:28.675 12:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.933 12:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:28.933 NULL1 00:09:28.933 12:43:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:29.191 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1604421 00:09:29.191 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:29.192 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:29.192 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.192 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.450 Read completed with error (sct=0, sc=11) 00:09:29.450 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.708 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:29.708 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:29.708 true 00:09:29.708 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:29.708 12:44:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.643 12:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.902 12:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:30.902 12:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:30.902 true 00:09:30.903 
12:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:30.903 12:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.161 12:44:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.420 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:31.420 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:31.420 true 00:09:31.420 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:31.420 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.678 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.959 [2024-07-15 12:44:02.700541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.700963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.701005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.701045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.701083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.959 [2024-07-15 12:44:02.701123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... the same one-line *ERROR* record repeats for the remaining outstanding reads, only the timestamp advancing, through 12:44:02.704487 ...] 00:09:31.960 [2024-07-15 12:44:02.704529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.704966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.705970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706045] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.706957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 
[2024-07-15 12:44:02.707174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.707969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.960 [2024-07-15 12:44:02.708935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.708977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.709377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710134] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.710967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 
[2024-07-15 12:44:02.711262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.711971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.712968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713574] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.713966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.961 [2024-07-15 12:44:02.714264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 
[2024-07-15 12:44:02.714678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.714957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.715595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.716971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717680] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.717994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 
[2024-07-15 12:44:02.718712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.718986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.962 [2024-07-15 12:44:02.719471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.719984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.720964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721489] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.721976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 
[2024-07-15 12:44:02.722614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.722990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 [2024-07-15 12:44:02.723804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.963 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:31.964 [2024-07-15 12:44:02.723848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.964 [2024-07-15 12:44:02.723895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.964 [2024-07-15 
12:44:02.723941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:31.964 [... the same *ERROR* line from ctrlr_bdev.c:309 repeats for every read in this burst; per-command timestamps from 2024-07-15 12:44:02.723985 through 12:44:02.730299 elided ...]
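For context on what this flood means: the nvmf target is rejecting reads whose data buffer, as described by the command's SGL, is smaller than the requested LBA range (here 1 block of 512 bytes against a 1-byte SGL). The following is a minimal standalone sketch of that length check, paraphrased from the error text; the function and parameter names are illustrative assumptions, not a verbatim copy of SPDK's lib/nvmf/ctrlr_bdev.c.

/*
 * Minimal sketch (illustrative, not SPDK source): the check behind the
 * "Read NLB n * block size b > SGL length l" error above. Before issuing
 * the read to the backing bdev, the target verifies that the buffer
 * mapped by the request's SGL is large enough for the requested LBA range.
 */
#include <stdint.h>
#include <stdio.h>

static int
check_read_buffer(uint64_t num_blocks,   /* NVMe NLB field + 1 */
                  uint32_t block_size,   /* logical block size of the namespace */
                  uint32_t sgl_length)   /* bytes described by the command's SGL */
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
		        (unsigned long long)num_blocks, block_size, sgl_length);
		return -1; /* the real target fails the command instead of reading */
	}
	return 0;
}

int
main(void)
{
	/* The values seen in this log: 1 block of 512 bytes vs. a 1-byte SGL. */
	return check_read_buffer(1, 512, 1) == 0 ? 0 : 1;
}

The stress workload here is evidently issuing deliberately undersized reads, so each one is rejected and logged, which is where the wall of identical lines comes from.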
00:09:31.964 [2024-07-15 12:44:02.730344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:31.965 [... same *ERROR* line repeated through 12:44:02.731232 ...]
00:09:31.965 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:09:31.965 [... same *ERROR* line repeated, 12:44:02.731280 through 12:44:02.731569 ...]
00:09:31.965 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:09:31.965 [... same *ERROR* line repeated, 12:44:02.731614 through 12:44:02.731938 ...]
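The two interleaved script lines are the actual test progress: the `@49` and `@50` markers in the shell trace point at source lines of target/ns_hotplug_stress.sh, which bumps its null_size counter to 1004 and then calls scripts/rpc.py bdev_null_resize NULL1 1004 to resize the null bdev NULL1 (the new size argument is in MiB, per SPDK's bdev_null RPC documentation) while the error-generating I/O keeps running. Judging by the run continuing past this point, the *ERROR* spam is expected noise from the hotplug/resize stress rather than a test failure.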
00:09:31.965 [... same *ERROR* line repeats continuously, 12:44:02.731992 through 12:44:02.750014; individual per-read entries elided ...]
00:09:31.969 [2024-07-15 12:44:02.750058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.750990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751107] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.751992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.752035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.752086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.752891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.752942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 
[2024-07-15 12:44:02.752988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.753964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.969 [2024-07-15 12:44:02.754926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.754964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755123] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.755977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 
[2024-07-15 12:44:02.756504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.756549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.757976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.758960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759140] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.759964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.970 [2024-07-15 12:44:02.760250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 
[2024-07-15 12:44:02.760378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.760999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.761966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.762652] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.763978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 
[2024-07-15 12:44:02.764491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.764959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.971 [2024-07-15 12:44:02.765328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.765965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.972 [2024-07-15 12:44:02.766805] ctrlr_bdev.c: 
00:09:31.972 [2024-07-15 12:44:02.766844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:31.973 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-07-15 12:44:02.794809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.794861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.794905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.794948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.794992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.795984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.796025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.796064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.796102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.796147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.977 [2024-07-15 12:44:02.796196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.796974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.797956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 
[2024-07-15 12:44:02.798793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.798965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.799971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800923] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.800971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.978 [2024-07-15 12:44:02.801452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.801969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 
[2024-07-15 12:44:02.802306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.802990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.803969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.804006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.804046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.804855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.804901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.804939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.804978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805151] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.805965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 
[2024-07-15 12:44:02.806234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.979 [2024-07-15 12:44:02.806709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.806756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.806802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.806848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.806900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.806944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.806991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.807997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.808978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809016] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.809989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 
[2024-07-15 12:44:02.810123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.810998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.811043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.811094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.811138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.811182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.980 [2024-07-15 12:44:02.811239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.981 [2024-07-15 12:44:02.811282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.981 [2024-07-15 12:44:02.811314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for every rejected read from 12:44:02.811352 through 12:44:02.825414; duplicates elided ...]
00:09:31.983 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
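For context on the flood above: the repeated *ERROR* line is the NVMe-oF target rejecting each fuzzed Read whose transfer length does not fit the data buffer the command describes. NLB is a zero-based field, so "NLB 1" means one block; one block times the 512-byte block size exceeds the 1-byte SGL, so the request is failed with Data SGL Length Invalid (sct=0, sc=15, the same status the suppressed completions report) instead of ever reaching the bdev. Below is a minimal standalone sketch of that check; the struct and function names (read_req, nvmf_check_read_len) are illustrative stand-ins, not SPDK's actual API, which lives in lib/nvmf/ctrlr_bdev.c near the nvmf_bdev_ctrlr_read_cmd function named in the log.

```c
/*
 * Standalone sketch of the length check behind the repeated log line above.
 * read_req and nvmf_check_read_len are hypothetical names, not SPDK's API.
 * NVMe status values: sct=0 is the Generic Command Status type and sc=0x0f
 * (decimal 15) is "Data SGL Length Invalid", matching the suppressed
 * "Read completed with error (sct=0, sc=15)" completions in this log.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC             0x0 /* status code type: generic */
#define SC_DATA_SGL_LEN_INVALID 0xf /* status code: Data SGL Length Invalid */

struct read_req {           /* hypothetical stand-in for an NVMe-oF read */
	uint16_t nlb;       /* NLB field from CDW12: block count, zero-based */
	uint32_t sgl_len;   /* total byte length described by the command's SGL */
};

/* Returns true if the read may proceed; otherwise fills sct/sc for the
 * error completion and logs the same message seen throughout this run. */
static bool
nvmf_check_read_len(const struct read_req *req, uint32_t block_size,
		    uint8_t *sct, uint8_t *sc)
{
	uint64_t num_blocks = (uint64_t)req->nlb + 1; /* NLB is zero-based */

	if (num_blocks * block_size > req->sgl_len) {
		fprintf(stderr,
			"*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			num_blocks, block_size, req->sgl_len);
		*sct = SCT_GENERIC;
		*sc = SC_DATA_SGL_LEN_INVALID;
		return false;
	}
	return true;
}

int
main(void)
{
	/* The malformed command exercised here: NLB=0 (-> 1 block) with a
	 * 1-byte SGL against a 512-byte block size, so 512 > 1 and the
	 * command is completed with sct=0, sc=15 before reaching the bdev. */
	struct read_req req = { .nlb = 0, .sgl_len = 1 };
	uint8_t sct, sc;

	if (!nvmf_check_read_len(&req, 512, &sct, &sc)) {
		printf("completed with error (sct=%u, sc=%u)\n",
		       (unsigned)sct, (unsigned)sc);
	}
	return 0;
}
```

Compiled and run, the sketch reproduces both halves of the log output under those assumptions: the *ERROR* line for the 512 > 1 comparison and a completion carrying sct=0, sc=15.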
[... identical *ERROR* line repeated from 12:44:02.825448 through 12:44:02.839084; duplicates elided ...]
00:09:31.986 [2024-07-15 12:44:02.839124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986
[2024-07-15 12:44:02.839164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.839992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.840959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841589] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.841961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.842994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.843031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.986 [2024-07-15 12:44:02.843070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 
[2024-07-15 12:44:02.843153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.843969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.844966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845550] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.845976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 
[2024-07-15 12:44:02.846780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.846969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.847974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.987 [2024-07-15 12:44:02.848018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.848999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849537] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.849972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 
[2024-07-15 12:44:02.850736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.850974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.851564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.852992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.988 [2024-07-15 12:44:02.853861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.853909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.853954] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.854977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 
[2024-07-15 12:44:02.855105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.855983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.989 [2024-07-15 12:44:02.856406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat continuously, timestamps advancing from 12:44:02.856406 through 12:44:02.878186; repeated entries elided ...]
00:09:31.993 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
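For context on what this flood means: every read in this stretch asks for more data than the transport mapped for the command, so each request is rejected up front by the read-length check in nvmf_bdev_ctrlr_read_cmd() (the ctrlr_bdev.c:309 in every entry) and completed with NVMe generic status 0x0f, "Data SGL Length Invalid", which is the (sct=0, sc=15) in the suppressed completions. Below is a minimal standalone sketch of that check; the struct and field names are illustrative assumptions for this log, not SPDK's real types.

/*
 * Sketch of the validation behind the *ERROR* line above, modeled on the
 * read-length check in SPDK's nvmf_bdev_ctrlr_read_cmd() (lib/nvmf/ctrlr_bdev.c).
 * Types and field names here are illustrative stand-ins: the point is only
 * that a read of num_blocks * block_size bytes must fit in the SGL buffer the
 * transport mapped, or the command fails with NVMe generic status 0x0f.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC           0x00
#define NVME_SC_SGL_LENGTH_INVALID 0x0f   /* decimal 15, matching "sc=15" */

struct read_req {
    uint64_t num_blocks;  /* blocks to read (the 0-based NLB field + 1) */
    uint32_t block_size;  /* namespace block size; 512 in this run */
    uint32_t sgl_length;  /* bytes of payload buffer mapped for the I/O */
};

/* Returns 0 if the read fits its buffer; otherwise sets sct/sc and fails. */
static int read_cmd_check(const struct read_req *req, int *sct, int *sc)
{
    if (req->num_blocks * req->block_size > req->sgl_length) {
        fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                req->num_blocks, req->block_size, req->sgl_length);
        *sct = NVME_SCT_GENERIC;
        *sc  = NVME_SC_SGL_LENGTH_INVALID;
        return -1;
    }
    return 0;
}

int main(void)
{
    /* The exact case from this log: 1 block * 512 B > 1 B of SGL. */
    struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
    int sct, sc;

    if (read_cmd_check(&req, &sct, &sc) != 0)
        printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
    return 0;
}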
[... the same *ERROR* stream resumes at 12:44:02.878963 and continues through 12:44:02.884536; repeated entries elided ...]
00:09:31.994 [2024-07-15 12:44:02.884536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994
[2024-07-15 12:44:02.884582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.884957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.885003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.994 [2024-07-15 12:44:02.885048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.885908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886897] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.886972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.887958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 
[2024-07-15 12:44:02.888101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.888977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.889958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.890001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.890042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.890085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.890125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.890165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.995 [2024-07-15 12:44:02.890207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890723] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.890999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.891998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.892041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 
[2024-07-15 12:44:02.892090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.892137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.892179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:31.996 [2024-07-15 12:44:02.892213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.892866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.892912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.892957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 true 00:09:32.276 [2024-07-15 12:44:02.893739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.893979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.894025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.894069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.894118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.894167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.276 [2024-07-15 12:44:02.894215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.894947] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.895979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 
[2024-07-15 12:44:02.896304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.896984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.897843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.898999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899169] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.277 [2024-07-15 12:44:02.899650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.899960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 
[2024-07-15 12:44:02.900336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.900973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.901961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902714] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.902962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 
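The flood of identical messages appears to be the negative-path loop of the ctrlr_bdev unit test: each iteration submits a read whose transfer length (NLB times the namespace block size) exceeds the SGL the host supplied, and ctrlr_bdev.c:309 logs the mismatch before the command is rejected, so the case passes while still printing one *ERROR* line per iteration. A minimal sketch of that length check follows; the function and parameter names (read_cmd_check_sgl, num_blocks, block_size, sgl_length) are illustrative assumptions, not the actual SPDK source.

    #include <inttypes.h>
    #include <stdio.h>

    /* Sketch of the validation behind the repeated log line
     * "Read NLB 1 * block size 512 > SGL length 1": a read of
     * num_blocks logical blocks needs num_blocks * block_size bytes,
     * which must fit in the SGL described by the request.
     * Names are illustrative, not SPDK's. */
    static int
    read_cmd_check_sgl(uint64_t num_blocks, uint32_t block_size,
                       uint32_t sgl_length)
    {
            if (num_blocks * block_size > sgl_length) {
                    fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                            " > SGL length %" PRIu32 "\n",
                            num_blocks, block_size, sgl_length);
                    /* A real controller would fail the command here
                     * instead of issuing the bdev read. */
                    return -1;
            }
            return 0;
    }

    int
    main(void)
    {
            /* The values from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
            return read_cmd_check_sgl(1, 512, 1) == -1 ? 0 : 1;
    }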
[2024-07-15 12:44:02.903777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.903982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.904019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.904056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.904098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.904142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.904183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.905009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.905060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.278 [2024-07-15 12:44:02.905103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.905991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906693] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.906979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.907958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 
[2024-07-15 12:44:02.908057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.908716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.909992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.910035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.910072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.910106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.910150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.910192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.279 [2024-07-15 12:44:02.910238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910683] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.910969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 
[2024-07-15 12:44:02.911840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.911882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.912995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.913965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.914008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.280 [2024-07-15 12:44:02.914049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914130] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.914722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.915957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.916001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 [2024-07-15 12:44:02.916045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.281 
00:09:32.281 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:32.281 12:44:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... same ctrlr_bdev.c:309 read error repeated verbatim; duplicate lines elided ...] 00:09:32.283 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:32.283 [2024-07-15 12:44:02.928642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL
length 1 00:09:32.283 [2024-07-15 12:44:02.928691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.928736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.928782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.928828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.928876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.928919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.928965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.283 [2024-07-15 12:44:02.929585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.929966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.930855] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.931985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 
[2024-07-15 12:44:02.932872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.932986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.933968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.934995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.935044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.284 [2024-07-15 12:44:02.935088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935179] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.935965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 
[2024-07-15 12:44:02.936749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.936960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.937953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.938966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939102] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.285 [2024-07-15 12:44:02.939582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.939986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 
[2024-07-15 12:44:02.940171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.940975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.941389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.942960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943242] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.943972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 
[2024-07-15 12:44:02.944375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.286 [2024-07-15 12:44:02.944982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.287 [2024-07-15 12:44:02.945533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:09:32.287 [2024-07-15 12:44:02.945574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... previous line repeated back-to-back several hundred times, timestamps 2024-07-15 12:44:02.945611 through 12:44:02.971515, console time 00:09:32.287 through 00:09:32.292 ...]
[2024-07-15 12:44:02.971561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.971607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.971650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.972993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.973981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974210] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.974913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 
[2024-07-15 12:44:02.975568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.975962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.292 [2024-07-15 12:44:02.976311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.976986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977707] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.977757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.978958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 
[2024-07-15 12:44:02.979636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.979961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.293 [2024-07-15 12:44:02.980931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.980975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:32.294 [2024-07-15 12:44:02.981550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.981971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:32.294 [2024-07-15 12:44:02.982016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.982976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.983959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984608] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.984976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 
[2024-07-15 12:44:02.985902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.985983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.986024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.986061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.986102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.986146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.294 [2024-07-15 12:44:02.986184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.986977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.987979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.988369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.989963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.990005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 [2024-07-15 12:44:02.990045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.295 
00:09:32.295 [2024-07-15 12:44:02.990090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:32.295 (identical *ERROR* line repeated verbatim, with only the microsecond timestamp advancing, from 12:44:02.990135 through 12:44:03.017300; the duplicate lines are elided here)
00:09:32.301 [2024-07-15 12:44:03.017347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 12:44:03.017544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.017980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.018985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019715] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.019996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.020041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.020086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.020134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.020177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.020234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 
[2024-07-15 12:44:03.021603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.021972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.301 [2024-07-15 12:44:03.022428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.022988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.023983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024035] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.024714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 
[2024-07-15 12:44:03.025548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.025959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.302 [2024-07-15 12:44:03.026966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027745] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.027831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.028985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 
[2024-07-15 12:44:03.029084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.029952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.030000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.030046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.030087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.030135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:32.303 [2024-07-15 12:44:03.030879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.030925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 
12:44:03.030966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.031956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:32.303 [2024-07-15 12:44:03.032093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.303 [2024-07-15 12:44:03.032432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.032984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.033966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.304 [2024-07-15 12:44:03.034463] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:32.304 [2024-07-15 12:44:03.034508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:32.309 [... the identical *ERROR* line above repeated several hundred more times, timestamps 12:44:03.034508 through 12:44:03.062497; verbatim repeats omitted ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.062965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 
[2024-07-15 12:44:03.063729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.063972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.064017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.064067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.309 [2024-07-15 12:44:03.064112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.064907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.064953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.064996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.065962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066709] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.066995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.067953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 
[2024-07-15 12:44:03.067993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.068989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.310 [2024-07-15 12:44:03.069364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.069974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070786] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.070988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.071949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 
[2024-07-15 12:44:03.071990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.072987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.311 [2024-07-15 12:44:03.073654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.073991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074184] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.074601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.075992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 
[2024-07-15 12:44:03.076177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.076984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.312 [2024-07-15 12:44:03.077263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
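Editorial note on the collapsed run: the repeated *ERROR* line is SPDK's NVMe-oF target rejecting read commands whose implied transfer length (NLB blocks times the 512-byte block size) exceeds the buffer the transport-supplied SGL describes, here only 1 byte. A minimal sketch of that guard, with illustrative names rather than the verbatim ctrlr_bdev.c source, and assuming the wire-format zero-based NLB has already been converted to a block count:

/* Hedged sketch of the length guard behind the error above;
 * names are illustrative, not the exact SPDK internals. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_cmd_fits_sgl(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
    /* For the log above: 1 block * 512 bytes > 1-byte SGL, so reject. */
    if (num_blocks * block_size > sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                num_blocks, block_size, sgl_length);
        return false; /* request completes with an error; no bdev I/O is issued */
    }
    return true;
}

int
main(void)
{
    /* Reproduces the exact numbers seen in the log. */
    return read_cmd_fits_sgl(1, 512, 1) ? 0 : 1;
}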
00:09:32.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:32.312 12:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:32.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
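The "Read completed with error (sct=0, sc=11)" messages are the host side of the same failures, throttled by the test tool's 999-message suppression. In NVMe completions, sct is the status code type (0 means generic command status) and sc is the code within that type. A hedged sketch of how an SPDK host-side completion callback can surface that pair; the callback name read_done is hypothetical, while struct spdk_nvme_cpl and spdk_nvme_cpl_is_error() come from SPDK's public NVMe headers (the actual tester's print path may differ):

/* Hedged sketch of reporting an NVMe completion status pair. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        /* sct selects the status code set; sc is the code within it,
         * printed in the same shape as "(sct=0, sc=11)" above. */
        fprintf(stderr, "Read completed with error (sct=%d, sc=%d)\n",
                cpl->status.sct, cpl->status.sc);
    }
}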
00:09:32.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:32.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:32.596 [2024-07-15 12:44:03.275904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(... identical *ERROR* line repeated through 12:44:03.282067; duplicate log entries collapsed ...)
00:09:32.597 [2024-07-15 12:44:03.282107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.597 [2024-07-15 12:44:03.282729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.282998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283158] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.283967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 
[2024-07-15 12:44:03.284504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.284975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.285993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.286031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.286080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.286927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.286979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287396] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.287969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.288021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.288066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.288115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.288163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.598 [2024-07-15 12:44:03.288208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 
[2024-07-15 12:44:03.288602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.288973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.289962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290871] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.290967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.291982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 
[2024-07-15 12:44:03.292013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.292557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.293432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.293488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.293534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.293577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.293620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.599 [2024-07-15 12:44:03.293667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.293713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.293759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.293813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.293857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.293902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.293955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.294954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295188] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.295957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 
[2024-07-15 12:44:03.296288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.296960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.297956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.298001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.298052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.600 [2024-07-15 12:44:03.298094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.298982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.299031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.299078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.299120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.601 [2024-07-15 12:44:03.299152] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:32.601 [2024-07-15 12:44:03.299190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 error repeated several hundred times, 12:44:03.299231 through 12:44:03.306705; duplicate log lines omitted ...]
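For context, every omitted line above is the same validation failure: the NVMe-oF target's read path rejects a command whose data transfer (NLB x block size) is larger than the SGL the host described. A minimal C sketch of that check, reconstructed from the message format in the log (the check lives in SPDK's lib/nvmf/ctrlr_bdev.c, nvmf_bdev_ctrlr_read_cmd; the function and parameter names below are illustrative, not the exact source):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified sketch of the length check behind ctrlr_bdev.c:309.
     * Names are illustrative; the real code is SPDK's
     * nvmf_bdev_ctrlr_read_cmd(). */
    static bool
    read_transfer_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
    {
            if (nlb * block_size > sgl_length) {
                    /* Produces the message seen above:
                     * "Read NLB 1 * block size 512 > SGL length 1" */
                    fprintf(stderr,
                            "Read NLB %" PRIu64 " * block size %" PRIu32
                            " > SGL length %" PRIu32 "\n",
                            nlb, block_size, sgl_length);
                    return false; /* command fails; no bdev I/O is submitted */
            }
            return true;
    }

With the values in the log (NLB 1, block size 512, SGL length 1) the requested transfer is 512 bytes against a 1-byte SGL, so the target fails each read before it reaches the null bdev.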
00:09:32.602 12:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:09:32.602 12:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
[... the same ctrlr_bdev.c:309 error resumes immediately after the resize and repeats, 12:44:03.307452 through 12:44:03.319858; duplicate log lines omitted ...]
[... the same ctrlr_bdev.c:309 error repeats, 12:44:03.319897 through 12:44:03.320913; duplicate log lines omitted ...]
00:09:32.605 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... further ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" errors, 12:44:03.320958 through 12:44:03.326329; duplicate log lines omitted ...]
[2024-07-15 12:44:03.326372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.326984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.327982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.606 [2024-07-15 12:44:03.328861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.328905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.328952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.328999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.329964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 
[2024-07-15 12:44:03.330379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.330981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.331992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332685] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.332974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.333475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 [2024-07-15 12:44:03.334481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.607 
[2024-07-15 12:44:03.334528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.334958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.335991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336767] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.336971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.337980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 
[2024-07-15 12:44:03.338151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.338819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.608 [2024-07-15 12:44:03.339928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.339977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.340975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.341974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 
[2024-07-15 12:44:03.342086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.342975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.609 [2024-07-15 12:44:03.343396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:32.615 [... same *ERROR* line repeated several hundred times, timestamps 12:44:03.343396 through 12:44:03.370707 ...]
00:09:32.615 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
12:44:03.370752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.370799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.370839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.370875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.370912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.370958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.370996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:32.615 [2024-07-15 12:44:03.371818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.371981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.372774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.373965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374789] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.374984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.615 [2024-07-15 12:44:03.375365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 
[2024-07-15 12:44:03.375883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.375981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.376962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.377962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378668] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.378983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 
[2024-07-15 12:44:03.379792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.379970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.616 [2024-07-15 12:44:03.380363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.380979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.381974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382136] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.382472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.383983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 
[2024-07-15 12:44:03.384076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.617 [2024-07-15 12:44:03.384752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.384793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.384830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.384865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.384906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.384954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.384998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.385982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386362] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.386961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 [2024-07-15 12:44:03.387530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618 
[2024-07-15 12:44:03.387574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.618
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated several hundred times, message timestamps 12:44:03.387574 through 12:44:03.414910, log clock 00:09:32.618-00:09:32.624 ...]
[2024-07-15 12:44:03.414950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.414990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.415790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.416994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.417972] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.418011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.418053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.418099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.418139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.418180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.624 [2024-07-15 12:44:03.418220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.418991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 
[2024-07-15 12:44:03.419071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.419983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:32.625 [2024-07-15 
12:44:03.420818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.420996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:32.625 [2024-07-15 12:44:03.421888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.421967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.625 [2024-07-15 12:44:03.422919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.422968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.423977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424404] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.424993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 
[2024-07-15 12:44:03.425491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.425986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.426515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.427976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.428020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.428050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.428088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.428132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.428175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.626 [2024-07-15 12:44:03.428217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428433] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.428959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 
[2024-07-15 12:44:03.429665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.429959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.430963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.431968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432513] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [2024-07-15 12:44:03.432560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.627 [... last message repeated, timestamps 12:44:03.432609 through 12:44:03.460557 ...] [2024-07-15 12:44:03.460604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460636] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.460981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 
[2024-07-15 12:44:03.461741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.461988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.462508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.463971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.633 [2024-07-15 12:44:03.464516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464732] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.464987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 
[2024-07-15 12:44:03.465896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.465990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.466961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.467986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468345] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.468988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.469028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.469830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.469881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.469926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.469979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.634 [2024-07-15 12:44:03.470024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 
[2024-07-15 12:44:03.470215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.470964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.471969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472355] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.472964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:32.635 [2024-07-15 12:44:03.473202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.473474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474173] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.635 [2024-07-15 12:44:03.474509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.474967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 
[2024-07-15 12:44:03.475262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.475990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.476984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636 [2024-07-15 12:44:03.477652] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636
[2024-07-15 12:44:03.477693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.636
[... previous *ERROR* message repeated between 12:44:03.477 and 12:44:03.481; duplicate lines omitted ...] 00:09:32.637
true 00:09:32.637
[... previous *ERROR* message repeated between 12:44:03.481 and 12:44:03.504; duplicate lines omitted ...] 00:09:32.642
12:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:32.642
[... previous *ERROR* message repeated; duplicate lines omitted ...] 00:09:32.642
12:44:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.642
[... previous *ERROR* message repeated; duplicate lines omitted ...] 00:09:32.642
[2024-07-15
12:44:03.505324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.505984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:32.642 [2024-07-15 12:44:03.506399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.506977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.507768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.508983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.509033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.509073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:32.642 [2024-07-15 12:44:03.509113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:33.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.839 12:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.839 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:09:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:33.839 12:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:33.839 12:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:34.098 true 00:09:34.098 12:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:34.098 12:44:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.036 12:44:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.036 12:44:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:35.036 12:44:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:35.295 true 00:09:35.295 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:35.295 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.555 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.814 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:35.814 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:35.814 true 00:09:35.814 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:35.814 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.072 12:44:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.330 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:36.330 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:36.330 true 00:09:36.330 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:36.330 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.597 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.882 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:36.882 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:36.882 true 00:09:36.882 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:36.882 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.141 12:44:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.400 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:37.400 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:37.400 true 00:09:37.400 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:37.401 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.659 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.918 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:37.918 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:37.918 true 00:09:37.918 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:37.918 12:44:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.297 12:44:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.297 12:44:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:39.297 12:44:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:39.555 true 00:09:39.555 12:44:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:39.555 12:44:10 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.520 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.520 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:40.520 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:40.778 true 00:09:40.778 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:40.778 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.037 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.037 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:41.037 12:44:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:41.295 true 00:09:41.295 12:44:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:41.295 12:44:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.668 12:44:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.668 12:44:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:42.668 12:44:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:42.927 true 00:09:42.927 12:44:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:42.927 12:44:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.863 12:44:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.863 12:44:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:43.863 12:44:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:44.122 true 00:09:44.122 12:44:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:44.122 12:44:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.381 12:44:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.381 12:44:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:44.381 12:44:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:44.640 true 00:09:44.640 12:44:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:44.640 12:44:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.833 12:44:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.833 12:44:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:45.833 12:44:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:46.091 true 00:09:46.091 12:44:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:46.091 12:44:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.025 12:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.025 12:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:47.025 12:44:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:47.298 true 00:09:47.298 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:47.298 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
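The ns_hotplug_stress.sh@44-@50 trace markers that repeat through the stretch above all come from a single shell loop: while the I/O generator (PID 1604421) is still alive, the script hot-removes namespace 1, hot-adds the Delay0 bdev back, and grows the NULL1 null bdev by one unit per pass (null_size=1006, 1007, ... in the trace). A minimal sketch of that loop as the markers imply it; rpc_py, nqn, null_size, and perf_pid are assumed names, not necessarily the script's own identifiers:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1005                                     # perf_pid holds the backgrounded I/O generator's PID
    while kill -0 "$perf_pid"; do                      # @44: keep looping while the generator is alive
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove NSID 1 under active I/O
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0  # @46: hot-add the Delay0 bdev back
        ((null_size++))                                # @49: 1006, 1007, ... as seen in the trace
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # @50: resize the null bdev concurrently
    done
    wait "$perf_pid"                                   # @53: reap the generator once kill -0 fails

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the initiator batching its error reports; reads that land while NSID 1 is detached complete with an error status, which is the behavior this stress test is designed to provoke.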
00:09:47.557 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.557 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:47.557 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:47.815 true 00:09:47.815 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:47.815 12:44:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.190 12:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.190 12:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:49.190 12:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:49.448 true 00:09:49.448 12:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:49.448 12:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.385 12:44:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.386 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:50.386 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:50.644 true 00:09:50.644 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:50.645 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.645 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.903 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:50.903 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 
00:09:51.163 true 00:09:51.163 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:51.163 12:44:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.100 12:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.359 12:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:52.359 12:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:52.617 true 00:09:52.617 12:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:52.617 12:44:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.553 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.553 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:53.553 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:53.813 true 00:09:53.813 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:53.813 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.072 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.073 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:54.073 12:44:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:54.331 true 00:09:54.331 12:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:54.331 12:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.734 12:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.734 12:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:55.734 12:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:55.734 true 00:09:55.734 12:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:55.734 12:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.671 12:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.930 12:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:56.930 12:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:56.930 true 00:09:56.930 12:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:56.930 12:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.189 12:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.449 12:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:57.449 12:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:57.449 true 00:09:57.449 12:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421 00:09:57.449 12:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.826 12:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.826 12:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 
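A few lines below, the I/O generator exits and prints its latency summary. The Total row there is consistent with an IOPS-weighted mean of the two per-namespace rows, since both namespaces are measured over the same interval; a quick check with the figures copied from that table (awk used purely as a calculator):

    awk 'BEGIN {
        iops1 = 2571.72;  avg1 = 33948.20    # NSID 1 row: IOPS, Average latency (us)
        iops2 = 16894.90; avg2 = 7577.46     # NSID 2 row
        total = iops1 + iops2                # 19466.62, vs 19466.63 in the Total row (rounding)
        printf "%.2f\n", (iops1 * avg1 + iops2 * avg2) / total    # prints 11061.28
    }'

which reproduces the 11061.28 us Total average. The far slower NSID 1 path (33948.20 us average, ~1.03 s max) is consistent with reads stalling across the repeated hot-remove/re-add cycles.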
00:09:58.826 12:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:09:59.085 true
00:09:59.085 12:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421
00:09:59.085 12:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:00.022 Initializing NVMe Controllers
00:10:00.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:00.022 Controller IO queue size 128, less than required.
00:10:00.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:00.022 Controller IO queue size 128, less than required.
00:10:00.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:00.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:00.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:00.022 Initialization complete. Launching workers.
00:10:00.022 ========================================================
00:10:00.022                                                                            Latency(us)
00:10:00.022 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:00.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2571.72       1.26   33948.20    2007.42 1033692.31
00:10:00.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16894.90       8.25    7577.46    1328.83  306201.62
00:10:00.022 ========================================================
00:10:00.022 Total                                                                  :   19466.63       9.51   11061.28    1328.83 1033692.31
00:10:00.022
00:10:00.022 12:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:00.022 12:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:10:00.022 12:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:10:00.281 true
00:10:00.281 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604421
00:10:00.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1604421) - No such process
00:10:00.281 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1604421
00:10:00.281 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:00.540 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:00.540 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:00.540 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:00.540 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:00.540 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59
-- # (( i < nthreads )) 00:10:00.540 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:00.800 null0 00:10:00.800 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.800 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.800 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:01.059 null1 00:10:01.059 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.059 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.059 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:01.059 null2 00:10:01.059 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.059 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.059 12:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:01.318 null3 00:10:01.318 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.318 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.318 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:01.577 null4 00:10:01.577 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.577 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.577 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:01.577 null5 00:10:01.577 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.577 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.577 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:01.836 null6 00:10:01.836 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:01.836 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:01.836 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:02.095 null7 00:10:02.095 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.095 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.095 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:02.096 12:44:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
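The @58-@66 markers that begin here mark the second phase of the test: eight null bdevs (null0 through null7) are created, then eight add_remove workers are launched in the background, one per namespace ID, and their PIDs are collected so a single wait can reap them (the "wait 1610013 1610014 ..." a little further down lists exactly those eight workers). A hedged reconstruction of the driver from the markers, reusing the rpc_py variable from the earlier sketch:

    nthreads=8
    pids=()                                           # @58
    for ((i = 0; i < nthreads; i++)); do              # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096  # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove "$((i + 1))" "null$i" &            # @63: e.g. "add_remove 1 null0" in the trace
        pids+=("$!")                                  # @64: record each worker's PID
    done
    wait "${pids[@]}"                                 # @66: reap all eight workers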
00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
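Each backgrounded worker runs add_remove, whose @14-@18 markers show the body: bind the worker's bdev to its fixed namespace ID, then detach it again, ten times over. A sketch reconstructed from those markers (rpc_py and nqn as in the sketches above, not a verbatim copy of the script):

    add_remove() {
        local nsid=$1 bdev=$2                                          # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                 # @16: ten add/remove cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17: attach bdev as NSID $nsid
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18: detach it again
        done
    }

With all eight workers targeting nqn.2016-06.io.spdk:cnode1 concurrently, the adds and removes interleave freely, which is exactly the namespace-hotplug race this phase exercises.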
00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1610013 1610014 1610017 1610018 1610020 1610022 1610024 1610025 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.096 12:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.357 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.358 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.617 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.875 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.134 12:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.134 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.135 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.135 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.135 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.135 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.393 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.651 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.652 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.909 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.909 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.909 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.909 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.909 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.909 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.910 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.169 12:44:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.169 12:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.428 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.428 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.428 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.428 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.429 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.688 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.947 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.206 12:44:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.206 12:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.206 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.465 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.723 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.981 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.981 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.981 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.981 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.982 12:44:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
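The long interleaved run of nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls that ends here is the stress phase of ns_hotplug_stress.sh. Pieced together from the @14-@18 and @62-@66 xtrace tags in the log, the helper behind it looks roughly like the sketch below; the rpc_py wrapper and exact variable names are assumptions, while the RPC calls, the 10-iteration bound, and the eight parallel workers come straight from the trace.

```bash
#!/usr/bin/env bash
# Hedged reconstruction of the hot-plug stress loop, not the verbatim script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # stand-in
nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {
    local nsid=$1 bdev=$2                 # @14: e.g. nsid=8 bdev=null7
    for ((i = 0; i < 10; i++)); do        # @16
        # Attach the null bdev as namespace $nsid, then rip it back out (@17/@18).
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# @62-@66: eight workers churn namespaces 1-8 (backed by null0-null7) against
# the same subsystem in parallel.
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"
```

Because all eight workers write their xtrace to the same log, the add/remove entries above appear out of order; that interleaving is expected, not a failure.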
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:05.982 rmmod nvme_tcp
00:10:05.982 rmmod nvme_fabrics
00:10:05.982 rmmod nvme_keyring
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1603927 ']'
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1603927
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1603927 ']'
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1603927
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603927
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603927'
00:10:05.982 killing process with pid 1603927
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1603927
00:10:05.982 12:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1603927
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:06.240 12:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:08.770 12:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:08.770
00:10:08.770 real 0m47.585s
00:10:08.770 user 3m11.791s
00:10:08.770 sys 0m14.917s
00:10:08.770 12:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:08.770 12:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:08.770 ************************************
00:10:08.770 END TEST nvmf_ns_hotplug_stress
************************************
00:10:08.770 12:44:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:08.770 12:44:39 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:08.770 12:44:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:08.770 12:44:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:08.770 12:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:08.770 ************************************
00:10:08.770 START TEST nvmf_connect_stress
************************************
00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:08.770 * Looking for test storage...
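Looking back at the teardown just completed: nvmftestfini hands the target pid (1603927, the SPDK reactor) to killprocess() in common/autotest_common.sh. From the @948-@972 tags it can be reconstructed approximately as below; the sudo branch is not exercised in this run (the process name resolves to reactor_1), so its body is a guess.

```bash
# Hedged reconstruction of killprocess(), following the @948-@972 xtrace tags.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                            # @948: no pid, nothing to do
    kill -0 "$pid"                                       # @952: liveness check
    local process_name=""
    if [ "$(uname)" = Linux ]; then                      # @953
        process_name=$(ps --no-headers -o comm= "$pid")  # @954: reactor_1 here
    fi
    if [ "$process_name" = sudo ]; then                  # @958: don't kill sudo itself
        pid=$(pgrep -P "$pid")                           # assumption; not in the trace
    fi
    echo "killing process with pid $pid"                 # @966
    kill "$pid"                                          # @967
    wait "$pid"                                          # @972: reap before returning
}
```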
00:10:08.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.770 12:44:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.109 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.109 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.109 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:14.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.110 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.110 12:44:44 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.110 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.110 12:44:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:10:14.110 00:10:14.110 --- 10.0.0.2 ping statistics --- 00:10:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.110 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:10:14.110 00:10:14.110 --- 10.0.0.1 ping statistics --- 00:10:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.110 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.110 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1614263 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1614263 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1614263 ']' 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.370 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 [2024-07-15 12:44:45.116182] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
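[Annotation] The nvmf_tcp_init sequence traced above builds the two-endpoint test network by moving one port of the dual-port E810 NIC into a private network namespace, leaving its sibling port in the root namespace, and verifying reachability in both directions before the target starts. Condensed to its effective commands, a minimal sketch (interface names cvl_0_0/cvl_0_1 are what the ice driver created on this host and will differ elsewhere; the namespace name and addresses are taken verbatim from the trace):

    ip -4 addr flush cvl_0_0                                        # start from a clean slate
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

The same plumbing is torn down by nvmf_tcp_fini/_remove_spdk_ns at the end of each test and rebuilt for the next one, which is why the identical sequence reappears at 12:45:04 below for nvmf_fused_ordering.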
00:10:14.370 [2024-07-15 12:44:45.116238] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.370 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.370 [2024-07-15 12:44:45.188386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.370 [2024-07-15 12:44:45.267862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.370 [2024-07-15 12:44:45.267895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.370 [2024-07-15 12:44:45.267902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.370 [2024-07-15 12:44:45.267908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.370 [2024-07-15 12:44:45.267913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.370 [2024-07-15 12:44:45.267963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.370 [2024-07-15 12:44:45.267993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.370 [2024-07-15 12:44:45.267994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 [2024-07-15 12:44:45.967976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 [2024-07-15 12:44:45.992146] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.309 12:44:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.309 NULL1 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1614420 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.309 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.310 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.568 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.568 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:15.568 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.569 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.569 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.827 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.827 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:15.827 12:44:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.827 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.827 12:44:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.396 12:44:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1614420 00:10:16.396 12:44:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.396 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.396 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.655 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.655 12:44:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:16.655 12:44:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.655 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.655 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.914 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.914 12:44:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:16.914 12:44:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.914 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.914 12:44:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.173 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.173 12:44:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:17.173 12:44:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.173 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.173 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.433 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.433 12:44:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:17.433 12:44:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.433 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.433 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.001 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.001 12:44:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:18.001 12:44:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.001 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.001 12:44:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.260 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.260 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:18.260 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.260 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.260 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.519 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.519 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:18.519 12:44:49 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.519 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.520 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.779 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.779 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:18.779 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.779 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.779 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.038 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.038 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:19.038 12:44:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.038 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.038 12:44:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.607 12:44:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:19.607 12:44:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.607 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.607 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.866 12:44:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:19.866 12:44:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.866 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.866 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.125 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.125 12:44:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:20.125 12:44:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.125 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.125 12:44:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.384 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.384 12:44:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:20.384 12:44:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.384 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.384 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.953 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.953 12:44:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:20.953 12:44:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:20.953 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.953 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.213 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.213 12:44:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:21.213 12:44:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.213 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.213 12:44:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.472 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.472 12:44:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:21.472 12:44:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.472 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.472 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.730 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.730 12:44:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:21.730 12:44:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.730 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.730 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.989 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.989 12:44:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:21.989 12:44:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.989 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.989 12:44:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.556 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.556 12:44:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:22.556 12:44:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.556 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.556 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.815 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.815 12:44:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:22.815 12:44:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.815 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.815 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.073 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.073 12:44:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:23.073 12:44:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.073 12:44:53 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.073 12:44:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.332 12:44:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:23.332 12:44:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.332 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.332 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.899 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.899 12:44:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:23.899 12:44:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.899 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.899 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.158 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.158 12:44:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:24.158 12:44:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.158 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.158 12:44:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.416 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.416 12:44:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:24.416 12:44:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.416 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.416 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.675 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.675 12:44:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:24.675 12:44:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.675 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.675 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.933 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.933 12:44:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:24.933 12:44:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.933 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.934 12:44:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1614420 00:10:25.452 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1614420) - No such process 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1614420 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.452 rmmod nvme_tcp 00:10:25.452 rmmod nvme_fabrics 00:10:25.452 rmmod nvme_keyring 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1614263 ']' 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1614263 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1614263 ']' 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1614263 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614263 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1614263' 00:10:25.452 killing process with pid 1614263 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1614263 00:10:25.452 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1614263 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.711 12:44:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.616 12:44:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.616 00:10:27.616 real 0m19.364s 00:10:27.616 user 0m41.306s 00:10:27.616 sys 0m8.260s 00:10:27.616 12:44:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.616 12:44:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.616 ************************************ 00:10:27.616 END TEST nvmf_connect_stress 00:10:27.616 ************************************ 00:10:27.876 12:44:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:27.876 12:44:58 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:27.876 12:44:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.876 12:44:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.876 12:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.876 ************************************ 00:10:27.876 START TEST nvmf_fused_ordering 00:10:27.876 ************************************ 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:27.876 * Looking for test storage... 00:10:27.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.876 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2-6 -- # [PATH rotation trace elided: identical to the paths/export.sh dump captured at 12:44:39 for nvmf_connect_stress above] 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.877 12:44:58
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.877 12:44:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:34.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:34.509 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.509 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:34.510 Found net devices under 0000:86:00.0: cvl_0_0 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:34.510 Found net devices under 0000:86:00.1: cvl_0_1 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:10:34.510 00:10:34.510 --- 10.0.0.2 ping statistics --- 00:10:34.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.510 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:10:34.510 00:10:34.510 --- 10.0.0.1 ping statistics --- 00:10:34.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.510 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1619731 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1619731 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1619731 ']' 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.510 12:45:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 [2024-07-15 12:45:04.561003] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:34.510 [2024-07-15 12:45:04.561052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.510 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.510 [2024-07-15 12:45:04.633759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.510 [2024-07-15 12:45:04.716568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.510 [2024-07-15 12:45:04.716602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.510 [2024-07-15 12:45:04.716610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.510 [2024-07-15 12:45:04.716616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.510 [2024-07-15 12:45:04.716621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
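The trace above captures the whole TCP fixture: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, both ends get a 10.0.0.x/24 address, TCP port 4420 is opened in iptables, connectivity is ping-verified in both directions, and only then does nvmfappstart launch nvmf_tgt on core 1 (-m 0x2) inside the namespace while waitforlisten polls its RPC socket. A minimal sketch of that launch-and-poll step, assuming the conventional rpc.py location under $rootdir and an illustrative retry budget (this is not the harness's exact waitforlisten code):

    start_and_wait_for_tgt() {
        local sock=/var/tmp/spdk.sock i
        # Launch the target inside the namespace, mirroring the traced command line.
        ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
        nvmfpid=$!
        # Poll until the app answers on its UNIX-domain RPC socket, or give up.
        for ((i = 0; i < 100; i++)); do
            kill -0 "$nvmfpid" 2>/dev/null || return 1    # target died during startup
            "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                          # timed out waiting for the socket
    }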
00:10:34.510 [2024-07-15 12:45:04.716645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 [2024-07-15 12:45:05.412706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 [2024-07-15 12:45:05.432831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 NULL1 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.510 12:45:05 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.510 12:45:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:34.771 [2024-07-15 12:45:05.482179] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:34.771 [2024-07-15 12:45:05.482215] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619936 ] 00:10:34.771 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.030 Attached to nqn.2016-06.io.spdk:cnode1 00:10:35.030 Namespace ID: 1 size: 1GB 00:10:35.030 fused_ordering(0) 00:10:35.030 fused_ordering(1) 00:10:35.030 fused_ordering(2) 00:10:35.030 fused_ordering(3) 00:10:35.030 fused_ordering(4) 00:10:35.030 fused_ordering(5) 00:10:35.030 fused_ordering(6) 00:10:35.030 fused_ordering(7) 00:10:35.030 fused_ordering(8) 00:10:35.030 fused_ordering(9) 00:10:35.030 fused_ordering(10) 00:10:35.030 fused_ordering(11) 00:10:35.030 fused_ordering(12) 00:10:35.030 fused_ordering(13) 00:10:35.030 fused_ordering(14) 00:10:35.030 fused_ordering(15) 00:10:35.030 fused_ordering(16) 00:10:35.030 fused_ordering(17) 00:10:35.030 fused_ordering(18) 00:10:35.030 fused_ordering(19) 00:10:35.030 fused_ordering(20) 00:10:35.030 fused_ordering(21) 00:10:35.030 fused_ordering(22) 00:10:35.030 fused_ordering(23) 00:10:35.030 fused_ordering(24) 00:10:35.030 fused_ordering(25) 00:10:35.030 fused_ordering(26) 00:10:35.030 fused_ordering(27) 00:10:35.030 fused_ordering(28) 00:10:35.030 fused_ordering(29) 00:10:35.030 fused_ordering(30) 00:10:35.030 fused_ordering(31) 00:10:35.030 fused_ordering(32) 00:10:35.030 fused_ordering(33) 00:10:35.030 fused_ordering(34) 00:10:35.030 fused_ordering(35) 00:10:35.030 fused_ordering(36) 00:10:35.030 fused_ordering(37) 00:10:35.030 fused_ordering(38) 00:10:35.030 fused_ordering(39) 00:10:35.030 fused_ordering(40) 00:10:35.030 fused_ordering(41) 00:10:35.030 fused_ordering(42) 00:10:35.030 fused_ordering(43) 00:10:35.030 fused_ordering(44) 00:10:35.030 fused_ordering(45) 00:10:35.030 fused_ordering(46) 00:10:35.030 fused_ordering(47) 00:10:35.030 fused_ordering(48) 00:10:35.030 fused_ordering(49) 00:10:35.030 fused_ordering(50) 00:10:35.030 fused_ordering(51) 00:10:35.030 fused_ordering(52) 00:10:35.030 fused_ordering(53) 00:10:35.030 fused_ordering(54) 00:10:35.030 fused_ordering(55) 00:10:35.030 fused_ordering(56) 00:10:35.030 fused_ordering(57) 00:10:35.030 fused_ordering(58) 00:10:35.030 fused_ordering(59) 00:10:35.030 fused_ordering(60) 00:10:35.030 fused_ordering(61) 00:10:35.030 fused_ordering(62) 00:10:35.030 fused_ordering(63) 00:10:35.030 fused_ordering(64) 00:10:35.030 fused_ordering(65) 00:10:35.030 fused_ordering(66) 00:10:35.030 fused_ordering(67) 00:10:35.030 fused_ordering(68) 00:10:35.030 fused_ordering(69) 00:10:35.030 fused_ordering(70) 00:10:35.030 fused_ordering(71) 00:10:35.030 fused_ordering(72) 00:10:35.030 fused_ordering(73) 00:10:35.030 fused_ordering(74) 00:10:35.030 fused_ordering(75) 00:10:35.030 fused_ordering(76) 00:10:35.030 fused_ordering(77) 00:10:35.030 fused_ordering(78) 00:10:35.030 
fused_ordering(79) 00:10:35.030 fused_ordering(80) 00:10:35.030 fused_ordering(81) 00:10:35.030 fused_ordering(82) 00:10:35.030 fused_ordering(83) 00:10:35.030 fused_ordering(84) 00:10:35.030 fused_ordering(85) 00:10:35.030 fused_ordering(86) 00:10:35.030 fused_ordering(87) 00:10:35.030 fused_ordering(88) 00:10:35.030 fused_ordering(89) 00:10:35.030 fused_ordering(90) 00:10:35.030 fused_ordering(91) 00:10:35.030 fused_ordering(92) 00:10:35.030 fused_ordering(93) 00:10:35.030 fused_ordering(94) 00:10:35.030 fused_ordering(95) 00:10:35.030 fused_ordering(96) 00:10:35.030 fused_ordering(97) 00:10:35.030 fused_ordering(98) 00:10:35.030 fused_ordering(99) 00:10:35.030 fused_ordering(100) 00:10:35.030 fused_ordering(101) 00:10:35.030 fused_ordering(102) 00:10:35.030 fused_ordering(103) 00:10:35.030 fused_ordering(104) 00:10:35.030 fused_ordering(105) 00:10:35.030 fused_ordering(106) 00:10:35.030 fused_ordering(107) 00:10:35.030 fused_ordering(108) 00:10:35.030 fused_ordering(109) 00:10:35.030 fused_ordering(110) 00:10:35.030 fused_ordering(111) 00:10:35.030 fused_ordering(112) 00:10:35.030 fused_ordering(113) 00:10:35.030 fused_ordering(114) 00:10:35.030 fused_ordering(115) 00:10:35.030 fused_ordering(116) 00:10:35.030 fused_ordering(117) 00:10:35.030 fused_ordering(118) 00:10:35.030 fused_ordering(119) 00:10:35.030 fused_ordering(120) 00:10:35.030 fused_ordering(121) 00:10:35.030 fused_ordering(122) 00:10:35.030 fused_ordering(123) 00:10:35.030 fused_ordering(124) 00:10:35.030 fused_ordering(125) 00:10:35.030 fused_ordering(126) 00:10:35.030 fused_ordering(127) 00:10:35.030 fused_ordering(128) 00:10:35.030 fused_ordering(129) 00:10:35.030 fused_ordering(130) 00:10:35.030 fused_ordering(131) 00:10:35.031 fused_ordering(132) 00:10:35.031 fused_ordering(133) 00:10:35.031 fused_ordering(134) 00:10:35.031 fused_ordering(135) 00:10:35.031 fused_ordering(136) 00:10:35.031 fused_ordering(137) 00:10:35.031 fused_ordering(138) 00:10:35.031 fused_ordering(139) 00:10:35.031 fused_ordering(140) 00:10:35.031 fused_ordering(141) 00:10:35.031 fused_ordering(142) 00:10:35.031 fused_ordering(143) 00:10:35.031 fused_ordering(144) 00:10:35.031 fused_ordering(145) 00:10:35.031 fused_ordering(146) 00:10:35.031 fused_ordering(147) 00:10:35.031 fused_ordering(148) 00:10:35.031 fused_ordering(149) 00:10:35.031 fused_ordering(150) 00:10:35.031 fused_ordering(151) 00:10:35.031 fused_ordering(152) 00:10:35.031 fused_ordering(153) 00:10:35.031 fused_ordering(154) 00:10:35.031 fused_ordering(155) 00:10:35.031 fused_ordering(156) 00:10:35.031 fused_ordering(157) 00:10:35.031 fused_ordering(158) 00:10:35.031 fused_ordering(159) 00:10:35.031 fused_ordering(160) 00:10:35.031 fused_ordering(161) 00:10:35.031 fused_ordering(162) 00:10:35.031 fused_ordering(163) 00:10:35.031 fused_ordering(164) 00:10:35.031 fused_ordering(165) 00:10:35.031 fused_ordering(166) 00:10:35.031 fused_ordering(167) 00:10:35.031 fused_ordering(168) 00:10:35.031 fused_ordering(169) 00:10:35.031 fused_ordering(170) 00:10:35.031 fused_ordering(171) 00:10:35.031 fused_ordering(172) 00:10:35.031 fused_ordering(173) 00:10:35.031 fused_ordering(174) 00:10:35.031 fused_ordering(175) 00:10:35.031 fused_ordering(176) 00:10:35.031 fused_ordering(177) 00:10:35.031 fused_ordering(178) 00:10:35.031 fused_ordering(179) 00:10:35.031 fused_ordering(180) 00:10:35.031 fused_ordering(181) 00:10:35.031 fused_ordering(182) 00:10:35.031 fused_ordering(183) 00:10:35.031 fused_ordering(184) 00:10:35.031 fused_ordering(185) 00:10:35.031 fused_ordering(186) 00:10:35.031 
fused_ordering(187) 00:10:35.031 fused_ordering(188) 00:10:35.031 fused_ordering(189) 00:10:35.031 fused_ordering(190) 00:10:35.031 fused_ordering(191) 00:10:35.031 fused_ordering(192) 00:10:35.031 fused_ordering(193) 00:10:35.031 fused_ordering(194) 00:10:35.031 fused_ordering(195) 00:10:35.031 fused_ordering(196) 00:10:35.031 fused_ordering(197) 00:10:35.031 fused_ordering(198) 00:10:35.031 fused_ordering(199) 00:10:35.031 fused_ordering(200) 00:10:35.031 fused_ordering(201) 00:10:35.031 fused_ordering(202) 00:10:35.031 fused_ordering(203) 00:10:35.031 fused_ordering(204) 00:10:35.031 fused_ordering(205) 00:10:35.290 fused_ordering(206) 00:10:35.290 fused_ordering(207) 00:10:35.290 fused_ordering(208) 00:10:35.290 fused_ordering(209) 00:10:35.290 fused_ordering(210) 00:10:35.290 fused_ordering(211) 00:10:35.290 fused_ordering(212) 00:10:35.290 fused_ordering(213) 00:10:35.290 fused_ordering(214) 00:10:35.290 fused_ordering(215) 00:10:35.290 fused_ordering(216) 00:10:35.290 fused_ordering(217) 00:10:35.290 fused_ordering(218) 00:10:35.290 fused_ordering(219) 00:10:35.290 fused_ordering(220) 00:10:35.290 fused_ordering(221) 00:10:35.290 fused_ordering(222) 00:10:35.290 fused_ordering(223) 00:10:35.290 fused_ordering(224) 00:10:35.290 fused_ordering(225) 00:10:35.290 fused_ordering(226) 00:10:35.290 fused_ordering(227) 00:10:35.290 fused_ordering(228) 00:10:35.290 fused_ordering(229) 00:10:35.290 fused_ordering(230) 00:10:35.290 fused_ordering(231) 00:10:35.290 fused_ordering(232) 00:10:35.290 fused_ordering(233) 00:10:35.290 fused_ordering(234) 00:10:35.290 fused_ordering(235) 00:10:35.290 fused_ordering(236) 00:10:35.290 fused_ordering(237) 00:10:35.290 fused_ordering(238) 00:10:35.290 fused_ordering(239) 00:10:35.290 fused_ordering(240) 00:10:35.290 fused_ordering(241) 00:10:35.290 fused_ordering(242) 00:10:35.290 fused_ordering(243) 00:10:35.290 fused_ordering(244) 00:10:35.290 fused_ordering(245) 00:10:35.290 fused_ordering(246) 00:10:35.290 fused_ordering(247) 00:10:35.290 fused_ordering(248) 00:10:35.290 fused_ordering(249) 00:10:35.290 fused_ordering(250) 00:10:35.290 fused_ordering(251) 00:10:35.290 fused_ordering(252) 00:10:35.290 fused_ordering(253) 00:10:35.290 fused_ordering(254) 00:10:35.290 fused_ordering(255) 00:10:35.290 fused_ordering(256) 00:10:35.290 fused_ordering(257) 00:10:35.290 fused_ordering(258) 00:10:35.290 fused_ordering(259) 00:10:35.290 fused_ordering(260) 00:10:35.290 fused_ordering(261) 00:10:35.290 fused_ordering(262) 00:10:35.290 fused_ordering(263) 00:10:35.290 fused_ordering(264) 00:10:35.290 fused_ordering(265) 00:10:35.290 fused_ordering(266) 00:10:35.290 fused_ordering(267) 00:10:35.290 fused_ordering(268) 00:10:35.290 fused_ordering(269) 00:10:35.290 fused_ordering(270) 00:10:35.290 fused_ordering(271) 00:10:35.290 fused_ordering(272) 00:10:35.290 fused_ordering(273) 00:10:35.290 fused_ordering(274) 00:10:35.290 fused_ordering(275) 00:10:35.290 fused_ordering(276) 00:10:35.290 fused_ordering(277) 00:10:35.290 fused_ordering(278) 00:10:35.290 fused_ordering(279) 00:10:35.290 fused_ordering(280) 00:10:35.290 fused_ordering(281) 00:10:35.290 fused_ordering(282) 00:10:35.290 fused_ordering(283) 00:10:35.290 fused_ordering(284) 00:10:35.290 fused_ordering(285) 00:10:35.290 fused_ordering(286) 00:10:35.290 fused_ordering(287) 00:10:35.290 fused_ordering(288) 00:10:35.290 fused_ordering(289) 00:10:35.290 fused_ordering(290) 00:10:35.290 fused_ordering(291) 00:10:35.290 fused_ordering(292) 00:10:35.290 fused_ordering(293) 00:10:35.290 fused_ordering(294) 
00:10:35.290 fused_ordering(295) 00:10:35.290 fused_ordering(296) 00:10:35.290 fused_ordering(297) 00:10:35.290 fused_ordering(298) 00:10:35.290 fused_ordering(299) 00:10:35.290 fused_ordering(300) 00:10:35.290 fused_ordering(301) 00:10:35.290 fused_ordering(302) 00:10:35.290 fused_ordering(303) 00:10:35.290 fused_ordering(304) 00:10:35.290 fused_ordering(305) 00:10:35.290 fused_ordering(306) 00:10:35.290 fused_ordering(307) 00:10:35.290 fused_ordering(308) 00:10:35.290 fused_ordering(309) 00:10:35.290 fused_ordering(310) 00:10:35.290 fused_ordering(311) 00:10:35.290 fused_ordering(312) 00:10:35.290 fused_ordering(313) 00:10:35.290 fused_ordering(314) 00:10:35.290 fused_ordering(315) 00:10:35.290 fused_ordering(316) 00:10:35.290 fused_ordering(317) 00:10:35.290 fused_ordering(318) 00:10:35.290 fused_ordering(319) 00:10:35.290 fused_ordering(320) 00:10:35.290 fused_ordering(321) 00:10:35.290 fused_ordering(322) 00:10:35.290 fused_ordering(323) 00:10:35.290 fused_ordering(324) 00:10:35.290 fused_ordering(325) 00:10:35.290 fused_ordering(326) 00:10:35.290 fused_ordering(327) 00:10:35.290 fused_ordering(328) 00:10:35.290 fused_ordering(329) 00:10:35.290 fused_ordering(330) 00:10:35.290 fused_ordering(331) 00:10:35.290 fused_ordering(332) 00:10:35.290 fused_ordering(333) 00:10:35.290 fused_ordering(334) 00:10:35.290 fused_ordering(335) 00:10:35.290 fused_ordering(336) 00:10:35.290 fused_ordering(337) 00:10:35.290 fused_ordering(338) 00:10:35.290 fused_ordering(339) 00:10:35.290 fused_ordering(340) 00:10:35.290 fused_ordering(341) 00:10:35.290 fused_ordering(342) 00:10:35.290 fused_ordering(343) 00:10:35.290 fused_ordering(344) 00:10:35.290 fused_ordering(345) 00:10:35.290 fused_ordering(346) 00:10:35.290 fused_ordering(347) 00:10:35.290 fused_ordering(348) 00:10:35.290 fused_ordering(349) 00:10:35.290 fused_ordering(350) 00:10:35.290 fused_ordering(351) 00:10:35.290 fused_ordering(352) 00:10:35.290 fused_ordering(353) 00:10:35.290 fused_ordering(354) 00:10:35.290 fused_ordering(355) 00:10:35.290 fused_ordering(356) 00:10:35.290 fused_ordering(357) 00:10:35.290 fused_ordering(358) 00:10:35.290 fused_ordering(359) 00:10:35.290 fused_ordering(360) 00:10:35.290 fused_ordering(361) 00:10:35.290 fused_ordering(362) 00:10:35.290 fused_ordering(363) 00:10:35.290 fused_ordering(364) 00:10:35.290 fused_ordering(365) 00:10:35.290 fused_ordering(366) 00:10:35.290 fused_ordering(367) 00:10:35.290 fused_ordering(368) 00:10:35.290 fused_ordering(369) 00:10:35.290 fused_ordering(370) 00:10:35.290 fused_ordering(371) 00:10:35.290 fused_ordering(372) 00:10:35.290 fused_ordering(373) 00:10:35.290 fused_ordering(374) 00:10:35.290 fused_ordering(375) 00:10:35.290 fused_ordering(376) 00:10:35.290 fused_ordering(377) 00:10:35.290 fused_ordering(378) 00:10:35.290 fused_ordering(379) 00:10:35.290 fused_ordering(380) 00:10:35.290 fused_ordering(381) 00:10:35.290 fused_ordering(382) 00:10:35.290 fused_ordering(383) 00:10:35.290 fused_ordering(384) 00:10:35.290 fused_ordering(385) 00:10:35.290 fused_ordering(386) 00:10:35.290 fused_ordering(387) 00:10:35.290 fused_ordering(388) 00:10:35.290 fused_ordering(389) 00:10:35.290 fused_ordering(390) 00:10:35.290 fused_ordering(391) 00:10:35.290 fused_ordering(392) 00:10:35.290 fused_ordering(393) 00:10:35.290 fused_ordering(394) 00:10:35.290 fused_ordering(395) 00:10:35.290 fused_ordering(396) 00:10:35.290 fused_ordering(397) 00:10:35.290 fused_ordering(398) 00:10:35.290 fused_ordering(399) 00:10:35.290 fused_ordering(400) 00:10:35.290 fused_ordering(401) 00:10:35.290 
fused_ordering(402) 00:10:35.290 fused_ordering(403) 00:10:35.290 fused_ordering(404) 00:10:35.290 fused_ordering(405) 00:10:35.290 fused_ordering(406) 00:10:35.290 fused_ordering(407) 00:10:35.290 fused_ordering(408) 00:10:35.290 fused_ordering(409) 00:10:35.290 fused_ordering(410) 00:10:35.550 fused_ordering(411) 00:10:35.550 fused_ordering(412) 00:10:35.550 fused_ordering(413) 00:10:35.550 fused_ordering(414) 00:10:35.550 fused_ordering(415) 00:10:35.550 fused_ordering(416) 00:10:35.550 fused_ordering(417) 00:10:35.550 fused_ordering(418) 00:10:35.550 fused_ordering(419) 00:10:35.550 fused_ordering(420) 00:10:35.550 fused_ordering(421) 00:10:35.550 fused_ordering(422) 00:10:35.550 fused_ordering(423) 00:10:35.550 fused_ordering(424) 00:10:35.550 fused_ordering(425) 00:10:35.550 fused_ordering(426) 00:10:35.550 fused_ordering(427) 00:10:35.550 fused_ordering(428) 00:10:35.550 fused_ordering(429) 00:10:35.550 fused_ordering(430) 00:10:35.550 fused_ordering(431) 00:10:35.550 fused_ordering(432) 00:10:35.550 fused_ordering(433) 00:10:35.550 fused_ordering(434) 00:10:35.550 fused_ordering(435) 00:10:35.550 fused_ordering(436) 00:10:35.550 fused_ordering(437) 00:10:35.550 fused_ordering(438) 00:10:35.550 fused_ordering(439) 00:10:35.550 fused_ordering(440) 00:10:35.550 fused_ordering(441) 00:10:35.550 fused_ordering(442) 00:10:35.550 fused_ordering(443) 00:10:35.550 fused_ordering(444) 00:10:35.550 fused_ordering(445) 00:10:35.550 fused_ordering(446) 00:10:35.550 fused_ordering(447) 00:10:35.550 fused_ordering(448) 00:10:35.550 fused_ordering(449) 00:10:35.550 fused_ordering(450) 00:10:35.550 fused_ordering(451) 00:10:35.550 fused_ordering(452) 00:10:35.550 fused_ordering(453) 00:10:35.550 fused_ordering(454) 00:10:35.550 fused_ordering(455) 00:10:35.550 fused_ordering(456) 00:10:35.550 fused_ordering(457) 00:10:35.550 fused_ordering(458) 00:10:35.550 fused_ordering(459) 00:10:35.550 fused_ordering(460) 00:10:35.550 fused_ordering(461) 00:10:35.550 fused_ordering(462) 00:10:35.550 fused_ordering(463) 00:10:35.550 fused_ordering(464) 00:10:35.550 fused_ordering(465) 00:10:35.550 fused_ordering(466) 00:10:35.550 fused_ordering(467) 00:10:35.550 fused_ordering(468) 00:10:35.550 fused_ordering(469) 00:10:35.550 fused_ordering(470) 00:10:35.550 fused_ordering(471) 00:10:35.550 fused_ordering(472) 00:10:35.550 fused_ordering(473) 00:10:35.550 fused_ordering(474) 00:10:35.550 fused_ordering(475) 00:10:35.550 fused_ordering(476) 00:10:35.550 fused_ordering(477) 00:10:35.550 fused_ordering(478) 00:10:35.550 fused_ordering(479) 00:10:35.550 fused_ordering(480) 00:10:35.550 fused_ordering(481) 00:10:35.550 fused_ordering(482) 00:10:35.550 fused_ordering(483) 00:10:35.550 fused_ordering(484) 00:10:35.550 fused_ordering(485) 00:10:35.550 fused_ordering(486) 00:10:35.550 fused_ordering(487) 00:10:35.550 fused_ordering(488) 00:10:35.550 fused_ordering(489) 00:10:35.550 fused_ordering(490) 00:10:35.550 fused_ordering(491) 00:10:35.550 fused_ordering(492) 00:10:35.550 fused_ordering(493) 00:10:35.550 fused_ordering(494) 00:10:35.550 fused_ordering(495) 00:10:35.550 fused_ordering(496) 00:10:35.550 fused_ordering(497) 00:10:35.550 fused_ordering(498) 00:10:35.550 fused_ordering(499) 00:10:35.550 fused_ordering(500) 00:10:35.550 fused_ordering(501) 00:10:35.550 fused_ordering(502) 00:10:35.550 fused_ordering(503) 00:10:35.550 fused_ordering(504) 00:10:35.550 fused_ordering(505) 00:10:35.550 fused_ordering(506) 00:10:35.550 fused_ordering(507) 00:10:35.550 fused_ordering(508) 00:10:35.550 fused_ordering(509) 
00:10:35.550 fused_ordering(510) 00:10:35.550 fused_ordering(511) 00:10:35.550 fused_ordering(512) 00:10:35.550 fused_ordering(513) 00:10:35.550 fused_ordering(514) 00:10:35.550 fused_ordering(515) 00:10:35.550 fused_ordering(516) 00:10:35.550 fused_ordering(517) 00:10:35.550 fused_ordering(518) 00:10:35.550 fused_ordering(519) 00:10:35.550 fused_ordering(520) 00:10:35.550 fused_ordering(521) 00:10:35.550 fused_ordering(522) 00:10:35.550 fused_ordering(523) 00:10:35.550 fused_ordering(524) 00:10:35.550 fused_ordering(525) 00:10:35.550 fused_ordering(526) 00:10:35.550 fused_ordering(527) 00:10:35.550 fused_ordering(528) 00:10:35.550 fused_ordering(529) 00:10:35.550 fused_ordering(530) 00:10:35.550 fused_ordering(531) 00:10:35.550 fused_ordering(532) 00:10:35.550 fused_ordering(533) 00:10:35.550 fused_ordering(534) 00:10:35.550 fused_ordering(535) 00:10:35.550 fused_ordering(536) 00:10:35.550 fused_ordering(537) 00:10:35.550 fused_ordering(538) 00:10:35.550 fused_ordering(539) 00:10:35.550 fused_ordering(540) 00:10:35.550 fused_ordering(541) 00:10:35.550 fused_ordering(542) 00:10:35.550 fused_ordering(543) 00:10:35.550 fused_ordering(544) 00:10:35.550 fused_ordering(545) 00:10:35.550 fused_ordering(546) 00:10:35.550 fused_ordering(547) 00:10:35.550 fused_ordering(548) 00:10:35.550 fused_ordering(549) 00:10:35.550 fused_ordering(550) 00:10:35.550 fused_ordering(551) 00:10:35.550 fused_ordering(552) 00:10:35.550 fused_ordering(553) 00:10:35.550 fused_ordering(554) 00:10:35.550 fused_ordering(555) 00:10:35.550 fused_ordering(556) 00:10:35.550 fused_ordering(557) 00:10:35.550 fused_ordering(558) 00:10:35.550 fused_ordering(559) 00:10:35.550 fused_ordering(560) 00:10:35.550 fused_ordering(561) 00:10:35.550 fused_ordering(562) 00:10:35.550 fused_ordering(563) 00:10:35.550 fused_ordering(564) 00:10:35.550 fused_ordering(565) 00:10:35.550 fused_ordering(566) 00:10:35.550 fused_ordering(567) 00:10:35.550 fused_ordering(568) 00:10:35.550 fused_ordering(569) 00:10:35.550 fused_ordering(570) 00:10:35.550 fused_ordering(571) 00:10:35.550 fused_ordering(572) 00:10:35.550 fused_ordering(573) 00:10:35.550 fused_ordering(574) 00:10:35.550 fused_ordering(575) 00:10:35.550 fused_ordering(576) 00:10:35.550 fused_ordering(577) 00:10:35.550 fused_ordering(578) 00:10:35.550 fused_ordering(579) 00:10:35.550 fused_ordering(580) 00:10:35.550 fused_ordering(581) 00:10:35.550 fused_ordering(582) 00:10:35.550 fused_ordering(583) 00:10:35.550 fused_ordering(584) 00:10:35.550 fused_ordering(585) 00:10:35.550 fused_ordering(586) 00:10:35.550 fused_ordering(587) 00:10:35.550 fused_ordering(588) 00:10:35.550 fused_ordering(589) 00:10:35.550 fused_ordering(590) 00:10:35.550 fused_ordering(591) 00:10:35.550 fused_ordering(592) 00:10:35.550 fused_ordering(593) 00:10:35.550 fused_ordering(594) 00:10:35.550 fused_ordering(595) 00:10:35.550 fused_ordering(596) 00:10:35.550 fused_ordering(597) 00:10:35.550 fused_ordering(598) 00:10:35.550 fused_ordering(599) 00:10:35.550 fused_ordering(600) 00:10:35.550 fused_ordering(601) 00:10:35.550 fused_ordering(602) 00:10:35.550 fused_ordering(603) 00:10:35.550 fused_ordering(604) 00:10:35.550 fused_ordering(605) 00:10:35.550 fused_ordering(606) 00:10:35.550 fused_ordering(607) 00:10:35.550 fused_ordering(608) 00:10:35.550 fused_ordering(609) 00:10:35.550 fused_ordering(610) 00:10:35.550 fused_ordering(611) 00:10:35.550 fused_ordering(612) 00:10:35.550 fused_ordering(613) 00:10:35.550 fused_ordering(614) 00:10:35.550 fused_ordering(615) 00:10:36.120 fused_ordering(616) 00:10:36.120 
fused_ordering(617) 00:10:36.120 fused_ordering(618) 00:10:36.120 fused_ordering(619) 00:10:36.120 fused_ordering(620) 00:10:36.120 fused_ordering(621) 00:10:36.120 fused_ordering(622) 00:10:36.120 fused_ordering(623) 00:10:36.120 fused_ordering(624) 00:10:36.120 fused_ordering(625) 00:10:36.120 fused_ordering(626) 00:10:36.120 fused_ordering(627) 00:10:36.120 fused_ordering(628) 00:10:36.120 fused_ordering(629) 00:10:36.120 fused_ordering(630) 00:10:36.120 fused_ordering(631) 00:10:36.120 fused_ordering(632) 00:10:36.120 fused_ordering(633) 00:10:36.120 fused_ordering(634) 00:10:36.120 fused_ordering(635) 00:10:36.120 fused_ordering(636) 00:10:36.120 fused_ordering(637) 00:10:36.120 fused_ordering(638) 00:10:36.120 fused_ordering(639) 00:10:36.120 fused_ordering(640) 00:10:36.120 fused_ordering(641) 00:10:36.120 fused_ordering(642) 00:10:36.120 fused_ordering(643) 00:10:36.120 fused_ordering(644) 00:10:36.120 fused_ordering(645) 00:10:36.120 fused_ordering(646) 00:10:36.120 fused_ordering(647) 00:10:36.120 fused_ordering(648) 00:10:36.120 fused_ordering(649) 00:10:36.120 fused_ordering(650) 00:10:36.120 fused_ordering(651) 00:10:36.120 fused_ordering(652) 00:10:36.120 fused_ordering(653) 00:10:36.120 fused_ordering(654) 00:10:36.120 fused_ordering(655) 00:10:36.120 fused_ordering(656) 00:10:36.120 fused_ordering(657) 00:10:36.120 fused_ordering(658) 00:10:36.120 fused_ordering(659) 00:10:36.120 fused_ordering(660) 00:10:36.120 fused_ordering(661) 00:10:36.120 fused_ordering(662) 00:10:36.120 fused_ordering(663) 00:10:36.120 fused_ordering(664) 00:10:36.120 fused_ordering(665) 00:10:36.120 fused_ordering(666) 00:10:36.120 fused_ordering(667) 00:10:36.120 fused_ordering(668) 00:10:36.120 fused_ordering(669) 00:10:36.120 fused_ordering(670) 00:10:36.120 fused_ordering(671) 00:10:36.120 fused_ordering(672) 00:10:36.120 fused_ordering(673) 00:10:36.120 fused_ordering(674) 00:10:36.120 fused_ordering(675) 00:10:36.120 fused_ordering(676) 00:10:36.120 fused_ordering(677) 00:10:36.120 fused_ordering(678) 00:10:36.120 fused_ordering(679) 00:10:36.120 fused_ordering(680) 00:10:36.120 fused_ordering(681) 00:10:36.120 fused_ordering(682) 00:10:36.120 fused_ordering(683) 00:10:36.120 fused_ordering(684) 00:10:36.120 fused_ordering(685) 00:10:36.120 fused_ordering(686) 00:10:36.120 fused_ordering(687) 00:10:36.120 fused_ordering(688) 00:10:36.120 fused_ordering(689) 00:10:36.120 fused_ordering(690) 00:10:36.120 fused_ordering(691) 00:10:36.120 fused_ordering(692) 00:10:36.120 fused_ordering(693) 00:10:36.120 fused_ordering(694) 00:10:36.120 fused_ordering(695) 00:10:36.120 fused_ordering(696) 00:10:36.120 fused_ordering(697) 00:10:36.120 fused_ordering(698) 00:10:36.120 fused_ordering(699) 00:10:36.120 fused_ordering(700) 00:10:36.120 fused_ordering(701) 00:10:36.120 fused_ordering(702) 00:10:36.120 fused_ordering(703) 00:10:36.120 fused_ordering(704) 00:10:36.120 fused_ordering(705) 00:10:36.120 fused_ordering(706) 00:10:36.120 fused_ordering(707) 00:10:36.120 fused_ordering(708) 00:10:36.120 fused_ordering(709) 00:10:36.120 fused_ordering(710) 00:10:36.120 fused_ordering(711) 00:10:36.120 fused_ordering(712) 00:10:36.120 fused_ordering(713) 00:10:36.120 fused_ordering(714) 00:10:36.120 fused_ordering(715) 00:10:36.120 fused_ordering(716) 00:10:36.120 fused_ordering(717) 00:10:36.120 fused_ordering(718) 00:10:36.120 fused_ordering(719) 00:10:36.120 fused_ordering(720) 00:10:36.120 fused_ordering(721) 00:10:36.120 fused_ordering(722) 00:10:36.120 fused_ordering(723) 00:10:36.120 fused_ordering(724) 
00:10:36.120 fused_ordering(725) 00:10:36.120 fused_ordering(726) 00:10:36.120 fused_ordering(727) 00:10:36.120 fused_ordering(728) 00:10:36.120 fused_ordering(729) 00:10:36.120 fused_ordering(730) 00:10:36.120 fused_ordering(731) 00:10:36.120 fused_ordering(732) 00:10:36.120 fused_ordering(733) 00:10:36.120 fused_ordering(734) 00:10:36.120 fused_ordering(735) 00:10:36.120 fused_ordering(736) 00:10:36.120 fused_ordering(737) 00:10:36.120 fused_ordering(738) 00:10:36.120 fused_ordering(739) 00:10:36.120 fused_ordering(740) 00:10:36.120 fused_ordering(741) 00:10:36.120 fused_ordering(742) 00:10:36.120 fused_ordering(743) 00:10:36.120 fused_ordering(744) 00:10:36.120 fused_ordering(745) 00:10:36.120 fused_ordering(746) 00:10:36.120 fused_ordering(747) 00:10:36.120 fused_ordering(748) 00:10:36.120 fused_ordering(749) 00:10:36.120 fused_ordering(750) 00:10:36.120 fused_ordering(751) 00:10:36.120 fused_ordering(752) 00:10:36.120 fused_ordering(753) 00:10:36.120 fused_ordering(754) 00:10:36.120 fused_ordering(755) 00:10:36.120 fused_ordering(756) 00:10:36.120 fused_ordering(757) 00:10:36.120 fused_ordering(758) 00:10:36.120 fused_ordering(759) 00:10:36.120 fused_ordering(760) 00:10:36.120 fused_ordering(761) 00:10:36.120 fused_ordering(762) 00:10:36.120 fused_ordering(763) 00:10:36.120 fused_ordering(764) 00:10:36.120 fused_ordering(765) 00:10:36.120 fused_ordering(766) 00:10:36.120 fused_ordering(767) 00:10:36.120 fused_ordering(768) 00:10:36.120 fused_ordering(769) 00:10:36.120 fused_ordering(770) 00:10:36.120 fused_ordering(771) 00:10:36.120 fused_ordering(772) 00:10:36.120 fused_ordering(773) 00:10:36.120 fused_ordering(774) 00:10:36.120 fused_ordering(775) 00:10:36.120 fused_ordering(776) 00:10:36.120 fused_ordering(777) 00:10:36.120 fused_ordering(778) 00:10:36.120 fused_ordering(779) 00:10:36.120 fused_ordering(780) 00:10:36.120 fused_ordering(781) 00:10:36.120 fused_ordering(782) 00:10:36.120 fused_ordering(783) 00:10:36.120 fused_ordering(784) 00:10:36.120 fused_ordering(785) 00:10:36.120 fused_ordering(786) 00:10:36.120 fused_ordering(787) 00:10:36.120 fused_ordering(788) 00:10:36.120 fused_ordering(789) 00:10:36.120 fused_ordering(790) 00:10:36.120 fused_ordering(791) 00:10:36.120 fused_ordering(792) 00:10:36.120 fused_ordering(793) 00:10:36.120 fused_ordering(794) 00:10:36.120 fused_ordering(795) 00:10:36.120 fused_ordering(796) 00:10:36.120 fused_ordering(797) 00:10:36.120 fused_ordering(798) 00:10:36.120 fused_ordering(799) 00:10:36.120 fused_ordering(800) 00:10:36.120 fused_ordering(801) 00:10:36.120 fused_ordering(802) 00:10:36.120 fused_ordering(803) 00:10:36.120 fused_ordering(804) 00:10:36.120 fused_ordering(805) 00:10:36.120 fused_ordering(806) 00:10:36.120 fused_ordering(807) 00:10:36.120 fused_ordering(808) 00:10:36.120 fused_ordering(809) 00:10:36.120 fused_ordering(810) 00:10:36.120 fused_ordering(811) 00:10:36.120 fused_ordering(812) 00:10:36.120 fused_ordering(813) 00:10:36.120 fused_ordering(814) 00:10:36.120 fused_ordering(815) 00:10:36.120 fused_ordering(816) 00:10:36.120 fused_ordering(817) 00:10:36.120 fused_ordering(818) 00:10:36.120 fused_ordering(819) 00:10:36.120 fused_ordering(820) 00:10:36.689 fused_ordering(821) 00:10:36.689 fused_ordering(822) 00:10:36.689 fused_ordering(823) 00:10:36.689 fused_ordering(824) 00:10:36.689 fused_ordering(825) 00:10:36.689 fused_ordering(826) 00:10:36.689 fused_ordering(827) 00:10:36.689 fused_ordering(828) 00:10:36.689 fused_ordering(829) 00:10:36.689 fused_ordering(830) 00:10:36.689 fused_ordering(831) 00:10:36.689 
fused_ordering(832) 00:10:36.689 fused_ordering(833) 00:10:36.689 fused_ordering(834) 00:10:36.689 fused_ordering(835) 00:10:36.689 fused_ordering(836) 00:10:36.689 fused_ordering(837) 00:10:36.689 fused_ordering(838) 00:10:36.689 fused_ordering(839) 00:10:36.689 fused_ordering(840) 00:10:36.689 fused_ordering(841) 00:10:36.689 fused_ordering(842) 00:10:36.689 fused_ordering(843) 00:10:36.689 fused_ordering(844) 00:10:36.689 fused_ordering(845) 00:10:36.689 fused_ordering(846) 00:10:36.689 fused_ordering(847) 00:10:36.689 fused_ordering(848) 00:10:36.689 fused_ordering(849) 00:10:36.689 fused_ordering(850) 00:10:36.689 fused_ordering(851) 00:10:36.689 fused_ordering(852) 00:10:36.689 fused_ordering(853) 00:10:36.689 fused_ordering(854) 00:10:36.689 fused_ordering(855) 00:10:36.689 fused_ordering(856) 00:10:36.689 fused_ordering(857) 00:10:36.689 fused_ordering(858) 00:10:36.689 fused_ordering(859) 00:10:36.689 fused_ordering(860) 00:10:36.689 fused_ordering(861) 00:10:36.689 fused_ordering(862) 00:10:36.689 fused_ordering(863) 00:10:36.689 fused_ordering(864) 00:10:36.689 fused_ordering(865) 00:10:36.689 fused_ordering(866) 00:10:36.689 fused_ordering(867) 00:10:36.689 fused_ordering(868) 00:10:36.689 fused_ordering(869) 00:10:36.689 fused_ordering(870) 00:10:36.689 fused_ordering(871) 00:10:36.689 fused_ordering(872) 00:10:36.689 fused_ordering(873) 00:10:36.689 fused_ordering(874) 00:10:36.689 fused_ordering(875) 00:10:36.689 fused_ordering(876) 00:10:36.689 fused_ordering(877) 00:10:36.689 fused_ordering(878) 00:10:36.689 fused_ordering(879) 00:10:36.689 fused_ordering(880) 00:10:36.689 fused_ordering(881) 00:10:36.689 fused_ordering(882) 00:10:36.689 fused_ordering(883) 00:10:36.689 fused_ordering(884) 00:10:36.689 fused_ordering(885) 00:10:36.689 fused_ordering(886) 00:10:36.689 fused_ordering(887) 00:10:36.689 fused_ordering(888) 00:10:36.689 fused_ordering(889) 00:10:36.689 fused_ordering(890) 00:10:36.689 fused_ordering(891) 00:10:36.689 fused_ordering(892) 00:10:36.689 fused_ordering(893) 00:10:36.689 fused_ordering(894) 00:10:36.689 fused_ordering(895) 00:10:36.689 fused_ordering(896) 00:10:36.689 fused_ordering(897) 00:10:36.689 fused_ordering(898) 00:10:36.689 fused_ordering(899) 00:10:36.689 fused_ordering(900) 00:10:36.689 fused_ordering(901) 00:10:36.689 fused_ordering(902) 00:10:36.689 fused_ordering(903) 00:10:36.689 fused_ordering(904) 00:10:36.689 fused_ordering(905) 00:10:36.689 fused_ordering(906) 00:10:36.689 fused_ordering(907) 00:10:36.689 fused_ordering(908) 00:10:36.689 fused_ordering(909) 00:10:36.689 fused_ordering(910) 00:10:36.689 fused_ordering(911) 00:10:36.689 fused_ordering(912) 00:10:36.689 fused_ordering(913) 00:10:36.689 fused_ordering(914) 00:10:36.689 fused_ordering(915) 00:10:36.689 fused_ordering(916) 00:10:36.689 fused_ordering(917) 00:10:36.689 fused_ordering(918) 00:10:36.689 fused_ordering(919) 00:10:36.689 fused_ordering(920) 00:10:36.689 fused_ordering(921) 00:10:36.689 fused_ordering(922) 00:10:36.689 fused_ordering(923) 00:10:36.689 fused_ordering(924) 00:10:36.689 fused_ordering(925) 00:10:36.689 fused_ordering(926) 00:10:36.689 fused_ordering(927) 00:10:36.689 fused_ordering(928) 00:10:36.689 fused_ordering(929) 00:10:36.689 fused_ordering(930) 00:10:36.689 fused_ordering(931) 00:10:36.689 fused_ordering(932) 00:10:36.689 fused_ordering(933) 00:10:36.689 fused_ordering(934) 00:10:36.689 fused_ordering(935) 00:10:36.689 fused_ordering(936) 00:10:36.689 fused_ordering(937) 00:10:36.689 fused_ordering(938) 00:10:36.689 fused_ordering(939) 
00:10:36.689 fused_ordering(940) 00:10:36.689 fused_ordering(941) 00:10:36.689 fused_ordering(942) 00:10:36.689 fused_ordering(943) 00:10:36.689 fused_ordering(944) 00:10:36.689 fused_ordering(945) 00:10:36.689 fused_ordering(946) 00:10:36.689 fused_ordering(947) 00:10:36.689 fused_ordering(948) 00:10:36.689 fused_ordering(949) 00:10:36.689 fused_ordering(950) 00:10:36.689 fused_ordering(951) 00:10:36.689 fused_ordering(952) 00:10:36.689 fused_ordering(953) 00:10:36.689 fused_ordering(954) 00:10:36.689 fused_ordering(955) 00:10:36.689 fused_ordering(956) 00:10:36.689 fused_ordering(957) 00:10:36.689 fused_ordering(958) 00:10:36.689 fused_ordering(959) 00:10:36.689 fused_ordering(960) 00:10:36.689 fused_ordering(961) 00:10:36.689 fused_ordering(962) 00:10:36.689 fused_ordering(963) 00:10:36.689 fused_ordering(964) 00:10:36.689 fused_ordering(965) 00:10:36.689 fused_ordering(966) 00:10:36.689 fused_ordering(967) 00:10:36.689 fused_ordering(968) 00:10:36.689 fused_ordering(969) 00:10:36.689 fused_ordering(970) 00:10:36.689 fused_ordering(971) 00:10:36.689 fused_ordering(972) 00:10:36.689 fused_ordering(973) 00:10:36.689 fused_ordering(974) 00:10:36.689 fused_ordering(975) 00:10:36.689 fused_ordering(976) 00:10:36.689 fused_ordering(977) 00:10:36.689 fused_ordering(978) 00:10:36.689 fused_ordering(979) 00:10:36.689 fused_ordering(980) 00:10:36.689 fused_ordering(981) 00:10:36.689 fused_ordering(982) 00:10:36.689 fused_ordering(983) 00:10:36.689 fused_ordering(984) 00:10:36.689 fused_ordering(985) 00:10:36.689 fused_ordering(986) 00:10:36.689 fused_ordering(987) 00:10:36.689 fused_ordering(988) 00:10:36.689 fused_ordering(989) 00:10:36.689 fused_ordering(990) 00:10:36.689 fused_ordering(991) 00:10:36.689 fused_ordering(992) 00:10:36.689 fused_ordering(993) 00:10:36.689 fused_ordering(994) 00:10:36.689 fused_ordering(995) 00:10:36.689 fused_ordering(996) 00:10:36.689 fused_ordering(997) 00:10:36.689 fused_ordering(998) 00:10:36.689 fused_ordering(999) 00:10:36.689 fused_ordering(1000) 00:10:36.689 fused_ordering(1001) 00:10:36.689 fused_ordering(1002) 00:10:36.689 fused_ordering(1003) 00:10:36.689 fused_ordering(1004) 00:10:36.689 fused_ordering(1005) 00:10:36.689 fused_ordering(1006) 00:10:36.689 fused_ordering(1007) 00:10:36.689 fused_ordering(1008) 00:10:36.689 fused_ordering(1009) 00:10:36.689 fused_ordering(1010) 00:10:36.689 fused_ordering(1011) 00:10:36.689 fused_ordering(1012) 00:10:36.689 fused_ordering(1013) 00:10:36.689 fused_ordering(1014) 00:10:36.689 fused_ordering(1015) 00:10:36.689 fused_ordering(1016) 00:10:36.689 fused_ordering(1017) 00:10:36.689 fused_ordering(1018) 00:10:36.689 fused_ordering(1019) 00:10:36.689 fused_ordering(1020) 00:10:36.689 fused_ordering(1021) 00:10:36.689 fused_ordering(1022) 00:10:36.689 fused_ordering(1023) 00:10:36.689 12:45:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:10:36.690 rmmod nvme_tcp 00:10:36.690 rmmod nvme_fabrics 00:10:36.690 rmmod nvme_keyring 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1619731 ']' 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1619731 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1619731 ']' 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1619731 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1619731 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1619731' 00:10:36.690 killing process with pid 1619731 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1619731 00:10:36.690 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1619731 00:10:36.949 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.949 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.949 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.949 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.949 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.949 12:45:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.950 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.950 12:45:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.856 12:45:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.856 00:10:38.856 real 0m11.095s 00:10:38.856 user 0m5.626s 00:10:38.856 sys 0m5.815s 00:10:38.856 12:45:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.856 12:45:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.856 ************************************ 00:10:38.856 END TEST nvmf_fused_ordering 00:10:38.856 ************************************ 00:10:38.856 12:45:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:38.856 12:45:09 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:38.856 12:45:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.856 12:45:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
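The END TEST banner just above and the START TEST banner just below come from the harness's run_test wrapper, which prints the banners, times each suite (the real/user/sys summary above), and propagates the child script's exit status. A condensed sketch of that wrapper, with the banner text abbreviated (the real helper lives in autotest_common.sh and does more bookkeeping):

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                   # e.g. delete_subsystem.sh --transport=tcp
        local rc=$?                 # `time` is a bash keyword, so $? is the suite's status
        echo "************ END TEST $name ************"
        return $rc
    }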
00:10:38.857 12:45:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.857 ************************************ 00:10:38.857 START TEST nvmf_delete_subsystem 00:10:38.857 ************************************ 00:10:38.857 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:39.115 * Looking for test storage... 00:10:39.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.115 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.116 12:45:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.116 12:45:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:45.681 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:45.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:45.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.682 
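(Aside: the two "Found 0000:86:00.x" lines are gather_supported_nvmf_pci_devs matching sysfs vendor/device IDs against the tables built above (Intel 0x8086 with E810 IDs 0x1592/0x159b, plus the x722 and Mellanox lists). A standalone sketch of the same scan, reconstructed from the trace rather than copied from nvmf/common.sh:)

# Report Intel E810 functions (0x1592/0x159b) and the netdevs behind them.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")    # e.g. 0x8086
    device=$(<"$pci/device")    # e.g. 0x159b
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"
    done
done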
12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:45.682 Found net devices under 0000:86:00.0: cvl_0_0 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:45.682 Found net devices under 0000:86:00.1: cvl_0_1 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.682 12:45:15 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:45.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:10:45.682 00:10:45.682 --- 10.0.0.2 ping statistics --- 00:10:45.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.682 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:10:45.682 00:10:45.682 --- 10.0.0.1 ping statistics --- 00:10:45.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.682 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1624075 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1624075 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1624075 ']' 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.682 12:45:15 
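(Aside: the nvmf_tcp_init sequence above builds the whole test topology out of the two E810 ports: the target port cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1, and the pings verify both directions before the target starts. The same steps, collected from the trace:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side, then check reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1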
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.682 12:45:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.682 [2024-07-15 12:45:15.704190] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:45.682 [2024-07-15 12:45:15.704239] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.682 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.682 [2024-07-15 12:45:15.773414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.682 [2024-07-15 12:45:15.853729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.682 [2024-07-15 12:45:15.853764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.682 [2024-07-15 12:45:15.853771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.682 [2024-07-15 12:45:15.853777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.683 [2024-07-15 12:45:15.853782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
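(Aside: the app_setup_trace notices above mean the target was started with tracepoint group mask 0xFFFF, so its events can be inspected while the test runs, exactly as the notice suggests. The -f offline form is spdk_trace's file-parsing mode and may vary by SPDK version:)

# Live snapshot from the running target (app name nvmf, shm id 0):
spdk_trace -s nvmf -i 0

# Offline: keep the shared-memory trace file and parse it after the run.
cp /dev/shm/nvmf_trace.0 /tmp/
spdk_trace -f /tmp/nvmf_trace.0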
00:10:45.683 [2024-07-15 12:45:15.853839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.683 [2024-07-15 12:45:15.853840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 [2024-07-15 12:45:16.554247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 [2024-07-15 12:45:16.578401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 NULL1 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 Delay0 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.683 12:45:16 
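(Aside: the rpc_cmd calls above are ordinary scripts/rpc.py invocations against the target's default /var/tmp/spdk.sock. The bring-up so far, collected in one place with the exact arguments from the trace; Delay0 wraps the null bdev with 1,000,000 us (1 s) of artificial latency on every op so the subsystem deletion races against in-flight I/O. The nvmf_subsystem_add_ns attach follows just below:)

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000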
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1624320 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:45.683 12:45:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:45.941 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.941 [2024-07-15 12:45:16.666060] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:47.844 12:45:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.845 12:45:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.845 12:45:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 Write completed with error (sct=0, sc=8) 00:10:48.104 Read completed with error (sct=0, sc=8) 00:10:48.104 starting I/O failed: -6 00:10:48.104 Read 
completed with error (sct=0, sc=8) 00:10:48.105
[ repeated 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers condensed: once nvmf_delete_subsystem removes cnode1 mid-run, every queued perf I/O on both cores drains with error ]
00:10:48.105 [2024-07-15 12:45:18.875996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6280000c00 is same with the state(5) to be set
[ further error completions condensed ]
00:10:49.042 [2024-07-15 12:45:19.845034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43ac0 is same with the state(5) to be set
[ further error completions condensed ]
00:10:49.042 [2024-07-15 12:45:19.877965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42000 is same with the state(5) to be set
[ further error completions condensed ]
00:10:49.043 [2024-07-15 12:45:19.878153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c425c0 is same with the state(5) to be set
[ further error completions condensed ]
00:10:49.043 [2024-07-15 12:45:19.878284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f628000cfe0 is same with the state(5) to be set
[ further error completions condensed ]
00:10:49.043 [2024-07-15 12:45:19.878522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f628000d760 is same with the state(5) to be set
00:10:49.043 Initializing NVMe Controllers
00:10:49.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:49.043 Controller IO queue size 128, less than required.
00:10:49.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:49.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:49.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:49.043 Initialization complete. Launching workers.
00:10:49.043 ======================================================== 00:10:49.043 Latency(us) 00:10:49.043 Device Information : IOPS MiB/s Average min max 00:10:49.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.74 0.09 890274.33 469.57 1010825.03 00:10:49.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.84 0.09 880964.96 340.96 1011211.00 00:10:49.043 ======================================================== 00:10:49.043 Total : 368.58 0.18 885807.84 340.96 1011211.00 00:10:49.043 00:10:49.043 [2024-07-15 12:45:19.879004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c43ac0 (9): Bad file descriptor 00:10:49.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:49.043 12:45:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.043 12:45:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:49.043 12:45:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1624320 00:10:49.043 12:45:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1624320 00:10:49.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1624320) - No such process 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1624320 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1624320 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1624320 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
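(Aside: the Total row of the latency table above is just the IOPS-weighted mean of the two per-core rows; a quick check with the printed values, latencies in microseconds and 512-byte I/Os per perf's -o 512 flag:)

total IOPS  : 191.74 + 176.84 = 368.58
avg latency : (191.74 x 890274.33 + 176.84 x 880964.96) / 368.58 ≈ 885807.8 us
throughput  : 191.74 IOPS x 512 B ≈ 0.09 MiB/s per core, 0.18 MiB/s total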
00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.610 [2024-07-15 12:45:20.411164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1625013 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:49.610 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.610 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.610 [2024-07-15 12:45:20.488246] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:50.178 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:50.178 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:50.178 12:45:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:50.743 12:45:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:50.743 12:45:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:50.743 12:45:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.001 12:45:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.001 12:45:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:51.001 12:45:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:51.588 12:45:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:51.588 12:45:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:51.588 12:45:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.174 12:45:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:52.174 12:45:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:52.174 12:45:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.745 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:52.745 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:52.745 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.745 Initializing NVMe Controllers 00:10:52.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:52.745 Controller IO queue size 128, less than required. 00:10:52.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:52.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:52.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:52.745 Initialization complete. Launching workers. 
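(Aside: the repeating @60/@57/@58 trace lines above are delete_subsystem.sh's bounded wait for the perf process to die once its subsystem is gone. Roughly this shape when reconstructed from the trace; the exit-1 failure path is a guess at the script's intent, not its literal code:)

delay=0
# @57: kill -0 succeeds while perf is alive; the final failing check is the
# 'kill: (1625013) - No such process' message seen just below.
while kill -0 "$perf_pid"; do
    sleep 0.5                     # @58
    (( delay++ > 20 )) && exit 1  # @60: give up after ~10 s
done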
00:10:52.745 ======================================================== 00:10:52.745 Latency(us) 00:10:52.745 Device Information : IOPS MiB/s Average min max 00:10:52.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002343.87 1000160.51 1041073.15 00:10:52.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003628.35 1000151.45 1009849.20 00:10:52.745 ======================================================== 00:10:52.745 Total : 256.00 0.12 1002986.11 1000151.45 1041073.15 00:10:52.745 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1625013 00:10:53.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1625013) - No such process 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1625013 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.004 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:53.263 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.263 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:53.263 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.263 12:45:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.263 rmmod nvme_tcp 00:10:53.263 rmmod nvme_fabrics 00:10:53.263 rmmod nvme_keyring 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1624075 ']' 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1624075 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1624075 ']' 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1624075 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1624075 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1624075' 00:10:53.263 killing process with pid 1624075 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1624075 00:10:53.263 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1624075 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.521 12:45:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.423 12:45:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.423 00:10:55.423 real 0m16.538s 00:10:55.423 user 0m30.548s 00:10:55.423 sys 0m5.237s 00:10:55.423 12:45:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.423 12:45:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.423 ************************************ 00:10:55.423 END TEST nvmf_delete_subsystem 00:10:55.423 ************************************ 00:10:55.423 12:45:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:55.423 12:45:26 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:55.423 12:45:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.423 12:45:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.423 12:45:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.683 ************************************ 00:10:55.683 START TEST nvmf_ns_masking 00:10:55.683 ************************************ 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:55.683 * Looking for test storage... 
00:10:55.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3c8e15ed-93e2-4449-9056-0bab740aebb3 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=92d2deb8-45d5-4fa3-ad55-2902f893f551 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:55.683 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d25454cb-5334-44f6-8bb8-ee549d9cd0ce 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.684 12:45:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:02.251 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:02.251 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.251 
12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:02.251 Found net devices under 0000:86:00.0: cvl_0_0 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.251 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:02.252 Found net devices under 0000:86:00.1: cvl_0_1 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.252 12:45:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:11:02.252 00:11:02.252 --- 10.0.0.2 ping statistics --- 00:11:02.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.252 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:11:02.252 00:11:02.252 --- 10.0.0.1 ping statistics --- 00:11:02.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.252 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1629009 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1629009 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1629009 ']' 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.252 12:45:32 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.252 12:45:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:02.252 [2024-07-15 12:45:32.307491] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:02.252 [2024-07-15 12:45:32.307534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.252 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.252 [2024-07-15 12:45:32.373967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.252 [2024-07-15 12:45:32.452721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.252 [2024-07-15 12:45:32.452754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.252 [2024-07-15 12:45:32.452765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.252 [2024-07-15 12:45:32.452771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.252 [2024-07-15 12:45:32.452776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.252 [2024-07-15 12:45:32.452792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.252 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:02.559 [2024-07-15 12:45:33.292838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.559 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:02.559 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:02.559 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:02.559 Malloc1 00:11:02.816 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:02.816 Malloc2 00:11:02.816 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
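Condensed from the trace above, the target-side bring-up is four JSON-RPC calls: create the TCP transport, back two 64 MB malloc bdevs with 512-byte blocks, and create the subsystem. rpc.py below stands for the scripts/rpc.py invoked with its full workspace path in the log:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME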
00:11:03.072 12:45:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:03.329 12:45:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.329 [2024-07-15 12:45:34.231484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.329 12:45:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:03.329 12:45:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d25454cb-5334-44f6-8bb8-ee549d9cd0ce -a 10.0.0.2 -s 4420 -i 4 00:11:03.588 12:45:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.588 12:45:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:03.588 12:45:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.588 12:45:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:03.588 12:45:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:06.125 [ 0]:0x1 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=120ce082b29940c6adba4771de3181a4 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 120ce082b29940c6adba4771de3181a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
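The ns_is_visible helper traced here (the [ 0]:0x1 line followed by the nguid compare) reduces to two nvme-cli probes: list the NSID, then check whether Identify Namespace reports a non-zero NGUID. A masked NSID drops out of the active namespace list, and id-ns on an inactive NSID comes back zeroed, which is why the helper compares against all zeros. The same probe by hand, assuming the controller enumerated as /dev/nvme0 as it did in this run:

    nvme list-ns /dev/nvme0 | grep 0x1                    # is NSID 1 enumerated?
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros => masked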
00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:06.125 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:06.125 [ 0]:0x1 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=120ce082b29940c6adba4771de3181a4 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 120ce082b29940c6adba4771de3181a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:06.126 [ 1]:0x2 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.126 12:45:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.385 12:45:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:06.385 12:45:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:06.385 12:45:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d25454cb-5334-44f6-8bb8-ee549d9cd0ce -a 10.0.0.2 -s 4420 -i 4 00:11:06.644 12:45:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:06.644 12:45:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.644 12:45:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.644 12:45:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:06.644 12:45:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:06.644 12:45:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.548 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.549 12:45:39 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.549 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.807 [ 0]:0x2 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.807 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:09.066 [ 0]:0x1 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=120ce082b29940c6adba4771de3181a4 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 120ce082b29940c6adba4771de3181a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:09.066 [ 1]:0x2 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.066 12:45:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:09.326 [ 0]:0x2 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.326 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.584 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:09.585 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d25454cb-5334-44f6-8bb8-ee549d9cd0ce -a 10.0.0.2 -s 4420 -i 4 00:11:09.844 12:45:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:09.844 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.844 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.844 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:09.844 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:09.844 12:45:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
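Between the two connects, the trace exercises the masking controls themselves: the namespace is re-added with --no-auto-visible so no host sees it by default, and visibility is then granted and revoked per host NQN. The three RPCs as they appear in the log:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The -32602 'Invalid parameters' response a little further down is the negative case: the same remove aimed at NSID 2, which was added without --no-auto-visible, and the NOT wrapper asserts that this call fails.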
00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:11.748 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.008 [ 0]:0x1 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=120ce082b29940c6adba4771de3181a4 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 120ce082b29940c6adba4771de3181a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.008 [ 1]:0x2 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.008 12:45:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.278 [ 0]:0x2 00:11:12.278 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.279 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.536 [2024-07-15 12:45:43.401588] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:12.536 request: 00:11:12.536 { 00:11:12.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.536 "nsid": 2, 00:11:12.536 "host": "nqn.2016-06.io.spdk:host1", 00:11:12.536 "method": "nvmf_ns_remove_host", 00:11:12.536 "req_id": 1 00:11:12.536 } 00:11:12.536 Got JSON-RPC error response 00:11:12.536 response: 00:11:12.536 { 00:11:12.536 "code": -32602, 00:11:12.536 "message": "Invalid parameters" 00:11:12.536 } 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:12.536 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.794 [ 0]:0x2 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=635ba8e3416841109115fc3a95b99d4e 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
635ba8e3416841109115fc3a95b99d4e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1631028 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1631028 /var/tmp/host.sock 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1631028 ']' 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:12.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.794 12:45:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:12.794 [2024-07-15 12:45:43.640054] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
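Here a second SPDK app is started on /var/tmp/host.sock with -m 2 (a core mask selecting core 1), so host-side bdev_nvme controllers can be driven over RPC while the target keeps its default socket. The final round re-adds both namespaces with explicit NGUIDs; judging by the value handed to -g, the uuid2nguid helper is just upper-casing plus the dash-stripping tr seen in the trace. A sketch with the first UUID from this run, the add_ns line copied as echoed:

    echo 3c8e15ed-93e2-4449-9056-0bab740aebb3 | tr -d -   # 3c8e15ed93e2444990560bab740aebb3
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C8E15ED93E2444990560BAB740AEBB3 -i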
00:11:12.794 [2024-07-15 12:45:43.640099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631028 ] 00:11:12.794 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.794 [2024-07-15 12:45:43.707908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.053 [2024-07-15 12:45:43.781769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.621 12:45:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.621 12:45:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:13.621 12:45:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.880 12:45:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.880 12:45:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3c8e15ed-93e2-4449-9056-0bab740aebb3 00:11:13.880 12:45:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:13.880 12:45:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3C8E15ED93E2444990560BAB740AEBB3 -i 00:11:14.138 12:45:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 92d2deb8-45d5-4fa3-ad55-2902f893f551 00:11:14.138 12:45:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:14.138 12:45:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 92D2DEB845D54FA3AD552902F893F551 -i 00:11:14.397 12:45:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.656 12:45:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:14.656 12:45:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:14.656 12:45:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:15.224 nvme0n1 00:11:15.224 12:45:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:15.224 12:45:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:15.482 nvme1n2 00:11:15.482 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:15.482 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:15.482 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:15.482 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:15.482 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:15.740 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:15.740 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:15.740 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:15.740 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3c8e15ed-93e2-4449-9056-0bab740aebb3 == \3\c\8\e\1\5\e\d\-\9\3\e\2\-\4\4\4\9\-\9\0\5\6\-\0\b\a\b\7\4\0\a\e\b\b\3 ]] 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 92d2deb8-45d5-4fa3-ad55-2902f893f551 == \9\2\d\2\d\e\b\8\-\4\5\d\5\-\4\f\a\3\-\a\d\5\5\-\2\9\0\2\f\8\9\3\f\5\5\1 ]] 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1631028 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1631028 ']' 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1631028 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.999 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631028 00:11:16.257 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:16.257 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:16.257 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631028' 00:11:16.257 killing process with pid 1631028 00:11:16.257 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1631028 00:11:16.257 12:45:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1631028 00:11:16.516 12:45:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.516 12:45:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:16.774 12:45:47 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.774 rmmod nvme_tcp 00:11:16.774 rmmod nvme_fabrics 00:11:16.774 rmmod nvme_keyring 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1629009 ']' 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1629009 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1629009 ']' 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1629009 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629009 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629009' 00:11:16.774 killing process with pid 1629009 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1629009 00:11:16.774 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1629009 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.067 12:45:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.971 12:45:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.971 00:11:18.971 real 0m23.481s 00:11:18.971 user 0m25.362s 00:11:18.971 sys 0m6.513s 00:11:18.971 12:45:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.971 12:45:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 ************************************ 00:11:18.971 END TEST nvmf_ns_masking 00:11:18.971 ************************************ 00:11:18.971 12:45:49 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:18.971 12:45:49 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:18.971 12:45:49 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:18.971 12:45:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:18.971 12:45:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.971 12:45:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.230 ************************************ 00:11:19.230 START TEST nvmf_nvme_cli 00:11:19.230 ************************************ 00:11:19.230 12:45:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:19.230 * Looking for test storage... 00:11:19.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.231 [paths/export.sh@3-@6: re-exports and echo of the same PATH value, trimmed as duplicates] 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.231 12:45:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:25.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:25.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:25.814 Found net devices under 0000:86:00.0: cvl_0_0 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:25.814 Found net devices under 0000:86:00.1: cvl_0_1 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.814 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.815 12:45:55 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:11:25.815 00:11:25.815 --- 10.0.0.2 ping statistics --- 00:11:25.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.815 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:11:25.815 00:11:25.815 --- 10.0.0.1 ping statistics --- 00:11:25.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.815 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1635262 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1635262 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1635262 ']' 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.815 12:45:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.815 [2024-07-15 12:45:55.874991] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
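[Reference sketch; not part of the captured output. The nvmf_tcp_init trace above reduces to moving the target-side port into a dedicated network namespace and addressing both ends; a minimal replay of the same commands, assuming this run's ice port names (cvl_0_0 on the target side, cvl_0_1 on the initiator side) and its 10.0.0.0/24 addressing:

    # target port lives in its own netns; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
]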
00:11:25.815 [2024-07-15 12:45:55.875037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.815 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.815 [2024-07-15 12:45:55.946901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.815 [2024-07-15 12:45:56.028478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.815 [2024-07-15 12:45:56.028513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.815 [2024-07-15 12:45:56.028520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.815 [2024-07-15 12:45:56.028526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.815 [2024-07-15 12:45:56.028531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.815 [2024-07-15 12:45:56.028575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.815 [2024-07-15 12:45:56.028690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.815 [2024-07-15 12:45:56.028798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.815 [2024-07-15 12:45:56.028799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.815 [2024-07-15 12:45:56.740076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.815 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:25.815 Malloc0 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 Malloc1 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 [2024-07-15 12:45:56.821686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:26.073 00:11:26.073 Discovery Log Number of Records 2, Generation counter 2 00:11:26.073 =====Discovery Log Entry 0====== 00:11:26.073 trtype: tcp 00:11:26.073 adrfam: ipv4 00:11:26.073 subtype: current discovery subsystem 00:11:26.073 treq: not required 00:11:26.073 portid: 0 00:11:26.073 trsvcid: 4420 00:11:26.073 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.073 traddr: 10.0.0.2 00:11:26.073 eflags: explicit discovery connections, duplicate discovery information 00:11:26.073 sectype: none 00:11:26.073 =====Discovery Log Entry 1====== 00:11:26.073 trtype: tcp 00:11:26.073 adrfam: ipv4 00:11:26.073 subtype: nvme subsystem 00:11:26.073 treq: not required 00:11:26.073 portid: 0 00:11:26.073 trsvcid: 4420 00:11:26.073 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.073 traddr: 10.0.0.2 00:11:26.073 eflags: none 00:11:26.073 sectype: none 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:26.073 12:45:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.446 12:45:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:27.446 12:45:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:27.446 12:45:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.446 12:45:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:27.446 12:45:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:27.446 12:45:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:29.344 12:46:00 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:29.344 /dev/nvme0n1 ]] 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.344 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:29.602 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.860 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.860 rmmod nvme_tcp 00:11:29.860 rmmod nvme_fabrics 00:11:30.118 rmmod nvme_keyring 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1635262 ']' 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1635262 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1635262 ']' 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1635262 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1635262 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1635262' 00:11:30.118 killing process with pid 1635262 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1635262 00:11:30.118 12:46:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1635262 00:11:30.376 12:46:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.377 12:46:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.285 12:46:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.285 00:11:32.285 real 0m13.237s 00:11:32.285 user 0m21.895s 00:11:32.285 sys 0m4.974s 00:11:32.285 12:46:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.285 12:46:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:32.285 ************************************ 00:11:32.285 END TEST nvmf_nvme_cli 00:11:32.285 ************************************ 00:11:32.285 12:46:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:32.285 12:46:03 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:32.285 12:46:03 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:32.285 12:46:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.285 12:46:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.285 12:46:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.710 ************************************ 00:11:32.710 START TEST nvmf_vfio_user 00:11:32.710 ************************************ 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:32.710 * Looking for test storage... 00:11:32.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.710 [paths/export.sh@3-@6: re-exports and echo of the same PATH value, trimmed as duplicates] 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:32.710 
12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1636560 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1636560' 00:11:32.710 Process pid: 1636560 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:32.710 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1636560 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1636560 ']' 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.711 12:46:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:32.711 [2024-07-15 12:46:03.436460] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:32.711 [2024-07-15 12:46:03.436509] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.711 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.711 [2024-07-15 12:46:03.506953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.711 [2024-07-15 12:46:03.587645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.711 [2024-07-15 12:46:03.587681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.711 [2024-07-15 12:46:03.587688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.711 [2024-07-15 12:46:03.587696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.711 [2024-07-15 12:46:03.587701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
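[Reference sketch; not part of the captured output. The setup loop that follows wires one vfio-user device per iteration; condensed here to its effective RPC calls for device 1, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

    # transport is created once; each device then gets a socket dir, bdev, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
]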
00:11:32.711 [2024-07-15 12:46:03.588093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.711 [2024-07-15 12:46:03.588129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.711 [2024-07-15 12:46:03.588259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.711 [2024-07-15 12:46:03.588259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.644 12:46:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.644 12:46:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:33.644 12:46:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:34.577 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:34.577 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:34.577 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:34.577 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:34.577 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:34.577 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:34.836 Malloc1 00:11:34.836 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:35.093 12:46:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:35.093 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:35.352 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.352 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:35.352 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:35.609 Malloc2 00:11:35.609 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:35.868 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:35.868 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:36.128 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:36.128 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:36.128 12:46:06 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.128 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:36.128 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:36.128 12:46:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:36.128 [2024-07-15 12:46:06.982827] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:36.128 [2024-07-15 12:46:06.982859] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637269 ] 00:11:36.128 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.128 [2024-07-15 12:46:07.011785] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:36.128 [2024-07-15 12:46:07.014843] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:36.128 [2024-07-15 12:46:07.014862] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fae25b72000 00:11:36.128 [2024-07-15 12:46:07.015842] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.016844] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.017847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.018859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.019864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.020869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.021876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.022872] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.128 [2024-07-15 12:46:07.023882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:36.128 [2024-07-15 12:46:07.023890] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fae25b67000 00:11:36.128 [2024-07-15 12:46:07.024833] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:36.128 [2024-07-15 12:46:07.037452] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:36.128 [2024-07-15 12:46:07.037479] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:36.128 [2024-07-15 12:46:07.039976] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:36.128 [2024-07-15 12:46:07.040013] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:36.128 [2024-07-15 12:46:07.040085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:36.128 [2024-07-15 12:46:07.040102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:36.128 [2024-07-15 12:46:07.040107] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:36.128 [2024-07-15 12:46:07.040971] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:36.128 [2024-07-15 12:46:07.040980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:36.128 [2024-07-15 12:46:07.040986] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:36.128 [2024-07-15 12:46:07.041976] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:36.128 [2024-07-15 12:46:07.041984] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:36.128 [2024-07-15 12:46:07.041995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:36.128 [2024-07-15 12:46:07.042979] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:36.128 [2024-07-15 12:46:07.042988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:36.128 [2024-07-15 12:46:07.043991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:36.128 [2024-07-15 12:46:07.043999] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:36.128 [2024-07-15 12:46:07.044003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:36.128 [2024-07-15 12:46:07.044009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:36.128 [2024-07-15 12:46:07.044114] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:36.128 [2024-07-15 12:46:07.044118] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:36.128 [2024-07-15 12:46:07.044123] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:36.128 [2024-07-15 12:46:07.044997] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:36.128 [2024-07-15 12:46:07.045997] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:36.128 [2024-07-15 12:46:07.048235] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:36.128 [2024-07-15 12:46:07.049012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:36.128 [2024-07-15 12:46:07.049078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:36.128 [2024-07-15 12:46:07.050026] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:36.128 [2024-07-15 12:46:07.050033] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:36.128 [2024-07-15 12:46:07.050038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:36.128 [2024-07-15 12:46:07.050054] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:36.128 [2024-07-15 12:46:07.050061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:36.128 [2024-07-15 12:46:07.050074] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.128 [2024-07-15 12:46:07.050078] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.128 [2024-07-15 12:46:07.050090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.128 [2024-07-15 12:46:07.050125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:36.128 [2024-07-15 12:46:07.050135] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:36.128 [2024-07-15 12:46:07.050144] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:36.129 [2024-07-15 12:46:07.050148] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:36.129 [2024-07-15 12:46:07.050152] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:36.129 [2024-07-15 12:46:07.050156] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:36.129 [2024-07-15 12:46:07.050160] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:36.129 [2024-07-15 12:46:07.050164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.129 [2024-07-15 12:46:07.050211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.129 [2024-07-15 12:46:07.050218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.129 [2024-07-15 12:46:07.050230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.129 [2024-07-15 12:46:07.050235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050265] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:36.129 [2024-07-15 12:46:07.050269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050344] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050361] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:36.129 [2024-07-15 12:46:07.050365] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:36.129 [2024-07-15 12:46:07.050370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050393] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:36.129 [2024-07-15 12:46:07.050404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050416] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.129 [2024-07-15 12:46:07.050420] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.129 [2024-07-15 12:46:07.050426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050469] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.129 [2024-07-15 12:46:07.050472] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.129 [2024-07-15 12:46:07.050478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
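Distilled from the rpc.py and spdk_nvme_identify invocations traced above, a minimal sketch of the per-device setup this initialization sequence exercises (bare command names are shorthand; the run itself invokes them via the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk paths):

  # Target side: vfio-user transport plus one malloc-backed subsystem per device
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # Host side: attach and dump identify data; the -L flags enable the nvme/nvme_vfio/vfio_pci debug traces seen here
  spdk_nvme_identify -g -L nvme -L nvme_vfio -L vfio_pci -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'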
00:11:36.129 [2024-07-15 12:46:07.050507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050526] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:36.129 [2024-07-15 12:46:07.050530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:36.129 [2024-07-15 12:46:07.050536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:36.129 [2024-07-15 12:46:07.050552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050627] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:36.129 [2024-07-15 12:46:07.050632] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:36.129 [2024-07-15 12:46:07.050635] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:36.129 [2024-07-15 12:46:07.050638] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:36.129 [2024-07-15 12:46:07.050644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:36.129 [2024-07-15 12:46:07.050650] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:36.129 
[2024-07-15 12:46:07.050654] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:36.129 [2024-07-15 12:46:07.050659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050665] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:36.129 [2024-07-15 12:46:07.050669] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.129 [2024-07-15 12:46:07.050674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050680] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:36.129 [2024-07-15 12:46:07.050684] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:36.129 [2024-07-15 12:46:07.050689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:36.129 [2024-07-15 12:46:07.050696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:36.129 [2024-07-15 12:46:07.050722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:36.129 ===================================================== 00:11:36.129 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:36.129 ===================================================== 00:11:36.129 Controller Capabilities/Features 00:11:36.129 ================================ 00:11:36.129 Vendor ID: 4e58 00:11:36.129 Subsystem Vendor ID: 4e58 00:11:36.129 Serial Number: SPDK1 00:11:36.129 Model Number: SPDK bdev Controller 00:11:36.129 Firmware Version: 24.09 00:11:36.129 Recommended Arb Burst: 6 00:11:36.129 IEEE OUI Identifier: 8d 6b 50 00:11:36.129 Multi-path I/O 00:11:36.129 May have multiple subsystem ports: Yes 00:11:36.129 May have multiple controllers: Yes 00:11:36.129 Associated with SR-IOV VF: No 00:11:36.129 Max Data Transfer Size: 131072 00:11:36.129 Max Number of Namespaces: 32 00:11:36.129 Max Number of I/O Queues: 127 00:11:36.129 NVMe Specification Version (VS): 1.3 00:11:36.129 NVMe Specification Version (Identify): 1.3 00:11:36.129 Maximum Queue Entries: 256 00:11:36.129 Contiguous Queues Required: Yes 00:11:36.129 Arbitration Mechanisms Supported 00:11:36.129 Weighted Round Robin: Not Supported 00:11:36.129 Vendor Specific: Not Supported 00:11:36.129 Reset Timeout: 15000 ms 00:11:36.129 Doorbell Stride: 4 bytes 00:11:36.129 NVM Subsystem Reset: Not Supported 00:11:36.129 Command Sets Supported 00:11:36.129 NVM Command Set: Supported 00:11:36.129 Boot Partition: Not Supported 00:11:36.129 Memory Page Size Minimum: 4096 bytes 00:11:36.129 Memory Page Size Maximum: 4096 bytes 00:11:36.129 Persistent Memory Region: Not Supported 
00:11:36.129 Optional Asynchronous Events Supported 00:11:36.129 Namespace Attribute Notices: Supported 00:11:36.129 Firmware Activation Notices: Not Supported 00:11:36.129 ANA Change Notices: Not Supported 00:11:36.129 PLE Aggregate Log Change Notices: Not Supported 00:11:36.129 LBA Status Info Alert Notices: Not Supported 00:11:36.129 EGE Aggregate Log Change Notices: Not Supported 00:11:36.129 Normal NVM Subsystem Shutdown event: Not Supported 00:11:36.129 Zone Descriptor Change Notices: Not Supported 00:11:36.129 Discovery Log Change Notices: Not Supported 00:11:36.129 Controller Attributes 00:11:36.129 128-bit Host Identifier: Supported 00:11:36.129 Non-Operational Permissive Mode: Not Supported 00:11:36.129 NVM Sets: Not Supported 00:11:36.129 Read Recovery Levels: Not Supported 00:11:36.129 Endurance Groups: Not Supported 00:11:36.129 Predictable Latency Mode: Not Supported 00:11:36.129 Traffic Based Keep ALive: Not Supported 00:11:36.129 Namespace Granularity: Not Supported 00:11:36.129 SQ Associations: Not Supported 00:11:36.129 UUID List: Not Supported 00:11:36.129 Multi-Domain Subsystem: Not Supported 00:11:36.129 Fixed Capacity Management: Not Supported 00:11:36.129 Variable Capacity Management: Not Supported 00:11:36.129 Delete Endurance Group: Not Supported 00:11:36.129 Delete NVM Set: Not Supported 00:11:36.129 Extended LBA Formats Supported: Not Supported 00:11:36.129 Flexible Data Placement Supported: Not Supported 00:11:36.130 00:11:36.130 Controller Memory Buffer Support 00:11:36.130 ================================ 00:11:36.130 Supported: No 00:11:36.130 00:11:36.130 Persistent Memory Region Support 00:11:36.130 ================================ 00:11:36.130 Supported: No 00:11:36.130 00:11:36.130 Admin Command Set Attributes 00:11:36.130 ============================ 00:11:36.130 Security Send/Receive: Not Supported 00:11:36.130 Format NVM: Not Supported 00:11:36.130 Firmware Activate/Download: Not Supported 00:11:36.130 Namespace Management: Not Supported 00:11:36.130 Device Self-Test: Not Supported 00:11:36.130 Directives: Not Supported 00:11:36.130 NVMe-MI: Not Supported 00:11:36.130 Virtualization Management: Not Supported 00:11:36.130 Doorbell Buffer Config: Not Supported 00:11:36.130 Get LBA Status Capability: Not Supported 00:11:36.130 Command & Feature Lockdown Capability: Not Supported 00:11:36.130 Abort Command Limit: 4 00:11:36.130 Async Event Request Limit: 4 00:11:36.130 Number of Firmware Slots: N/A 00:11:36.130 Firmware Slot 1 Read-Only: N/A 00:11:36.130 Firmware Activation Without Reset: N/A 00:11:36.130 Multiple Update Detection Support: N/A 00:11:36.130 Firmware Update Granularity: No Information Provided 00:11:36.130 Per-Namespace SMART Log: No 00:11:36.130 Asymmetric Namespace Access Log Page: Not Supported 00:11:36.130 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:36.130 Command Effects Log Page: Supported 00:11:36.130 Get Log Page Extended Data: Supported 00:11:36.130 Telemetry Log Pages: Not Supported 00:11:36.130 Persistent Event Log Pages: Not Supported 00:11:36.130 Supported Log Pages Log Page: May Support 00:11:36.130 Commands Supported & Effects Log Page: Not Supported 00:11:36.130 Feature Identifiers & Effects Log Page:May Support 00:11:36.130 NVMe-MI Commands & Effects Log Page: May Support 00:11:36.130 Data Area 4 for Telemetry Log: Not Supported 00:11:36.130 Error Log Page Entries Supported: 128 00:11:36.130 Keep Alive: Supported 00:11:36.130 Keep Alive Granularity: 10000 ms 00:11:36.130 00:11:36.130 NVM Command Set Attributes 
00:11:36.130 ========================== 00:11:36.130 Submission Queue Entry Size 00:11:36.130 Max: 64 00:11:36.130 Min: 64 00:11:36.130 Completion Queue Entry Size 00:11:36.130 Max: 16 00:11:36.130 Min: 16 00:11:36.130 Number of Namespaces: 32 00:11:36.130 Compare Command: Supported 00:11:36.130 Write Uncorrectable Command: Not Supported 00:11:36.130 Dataset Management Command: Supported 00:11:36.130 Write Zeroes Command: Supported 00:11:36.130 Set Features Save Field: Not Supported 00:11:36.130 Reservations: Not Supported 00:11:36.130 Timestamp: Not Supported 00:11:36.130 Copy: Supported 00:11:36.130 Volatile Write Cache: Present 00:11:36.130 Atomic Write Unit (Normal): 1 00:11:36.130 Atomic Write Unit (PFail): 1 00:11:36.130 Atomic Compare & Write Unit: 1 00:11:36.130 Fused Compare & Write: Supported 00:11:36.130 Scatter-Gather List 00:11:36.130 SGL Command Set: Supported (Dword aligned) 00:11:36.130 SGL Keyed: Not Supported 00:11:36.130 SGL Bit Bucket Descriptor: Not Supported 00:11:36.130 SGL Metadata Pointer: Not Supported 00:11:36.130 Oversized SGL: Not Supported 00:11:36.130 SGL Metadata Address: Not Supported 00:11:36.130 SGL Offset: Not Supported 00:11:36.130 Transport SGL Data Block: Not Supported 00:11:36.130 Replay Protected Memory Block: Not Supported 00:11:36.130 00:11:36.130 Firmware Slot Information 00:11:36.130 ========================= 00:11:36.130 Active slot: 1 00:11:36.130 Slot 1 Firmware Revision: 24.09 00:11:36.130 00:11:36.130 00:11:36.130 Commands Supported and Effects 00:11:36.130 ============================== 00:11:36.130 Admin Commands 00:11:36.130 -------------- 00:11:36.130 Get Log Page (02h): Supported 00:11:36.130 Identify (06h): Supported 00:11:36.130 Abort (08h): Supported 00:11:36.130 Set Features (09h): Supported 00:11:36.130 Get Features (0Ah): Supported 00:11:36.130 Asynchronous Event Request (0Ch): Supported 00:11:36.130 Keep Alive (18h): Supported 00:11:36.130 I/O Commands 00:11:36.130 ------------ 00:11:36.130 Flush (00h): Supported LBA-Change 00:11:36.130 Write (01h): Supported LBA-Change 00:11:36.130 Read (02h): Supported 00:11:36.130 Compare (05h): Supported 00:11:36.130 Write Zeroes (08h): Supported LBA-Change 00:11:36.130 Dataset Management (09h): Supported LBA-Change 00:11:36.130 Copy (19h): Supported LBA-Change 00:11:36.130 00:11:36.130 Error Log 00:11:36.130 ========= 00:11:36.130 00:11:36.130 Arbitration 00:11:36.130 =========== 00:11:36.130 Arbitration Burst: 1 00:11:36.130 00:11:36.130 Power Management 00:11:36.130 ================ 00:11:36.130 Number of Power States: 1 00:11:36.130 Current Power State: Power State #0 00:11:36.130 Power State #0: 00:11:36.130 Max Power: 0.00 W 00:11:36.130 Non-Operational State: Operational 00:11:36.130 Entry Latency: Not Reported 00:11:36.130 Exit Latency: Not Reported 00:11:36.130 Relative Read Throughput: 0 00:11:36.130 Relative Read Latency: 0 00:11:36.130 Relative Write Throughput: 0 00:11:36.130 Relative Write Latency: 0 00:11:36.130 Idle Power: Not Reported 00:11:36.130 Active Power: Not Reported 00:11:36.130 Non-Operational Permissive Mode: Not Supported 00:11:36.130 00:11:36.130 Health Information 00:11:36.130 ================== 00:11:36.130 Critical Warnings: 00:11:36.130 Available Spare Space: OK 00:11:36.130 Temperature: OK 00:11:36.130 Device Reliability: OK 00:11:36.130 Read Only: No 00:11:36.130 Volatile Memory Backup: OK 00:11:36.130 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:36.130 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:36.130 Available Spare: 0% 00:11:36.130 
[2024-07-15 12:46:07.050809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:36.130 [2024-07-15 12:46:07.050818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:36.130 [2024-07-15 12:46:07.050843] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:36.130 [2024-07-15 12:46:07.050851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.130 [2024-07-15 12:46:07.050856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.130 [2024-07-15 12:46:07.050862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.130 [2024-07-15 12:46:07.050867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.130 [2024-07-15 12:46:07.051039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:36.130 [2024-07-15 12:46:07.051048] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:36.130 [2024-07-15 12:46:07.052041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:36.130 [2024-07-15 12:46:07.052090] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:36.130 [2024-07-15 12:46:07.052096] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:36.130 [2024-07-15 12:46:07.053049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:36.130 [2024-07-15 12:46:07.053058] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:36.130 [2024-07-15 12:46:07.053106] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:36.130 [2024-07-15 12:46:07.057232] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:36.389 Available Spare Threshold: 0% 00:11:36.389 Life Percentage Used: 0% 00:11:36.389 Data Units Read: 0 00:11:36.389 Data Units Written: 0 00:11:36.389 Host Read Commands: 0 00:11:36.389 Host Write Commands: 0 00:11:36.389 Controller Busy Time: 0 minutes 00:11:36.389 Power Cycles: 0 00:11:36.389 Power On Hours: 0 hours 00:11:36.389 Unsafe Shutdowns: 0 00:11:36.389 Unrecoverable Media Errors: 0 00:11:36.389 Lifetime Error Log Entries: 0 00:11:36.389 Warning Temperature Time: 0 minutes 00:11:36.389 Critical Temperature Time: 0 minutes 00:11:36.389 00:11:36.389 Number of Queues 00:11:36.389 ================ 00:11:36.389 Number of I/O Submission Queues: 127 00:11:36.389 Number of I/O Completion Queues: 127 00:11:36.389 00:11:36.389 Active Namespaces 00:11:36.389 ================= 00:11:36.389 Namespace ID:1 00:11:36.389 Error Recovery Timeout: Unlimited 00:11:36.389 Command
Set Identifier: NVM (00h) 00:11:36.389 Deallocate: Supported 00:11:36.389 Deallocated/Unwritten Error: Not Supported 00:11:36.389 Deallocated Read Value: Unknown 00:11:36.389 Deallocate in Write Zeroes: Not Supported 00:11:36.389 Deallocated Guard Field: 0xFFFF 00:11:36.389 Flush: Supported 00:11:36.389 Reservation: Supported 00:11:36.389 Namespace Sharing Capabilities: Multiple Controllers 00:11:36.389 Size (in LBAs): 131072 (0GiB) 00:11:36.389 Capacity (in LBAs): 131072 (0GiB) 00:11:36.389 Utilization (in LBAs): 131072 (0GiB) 00:11:36.389 NGUID: FDC16E27FE2F41A99DAEC19CCE63EF93 00:11:36.389 UUID: fdc16e27-fe2f-41a9-9dae-c19cce63ef93 00:11:36.389 Thin Provisioning: Not Supported 00:11:36.389 Per-NS Atomic Units: Yes 00:11:36.389 Atomic Boundary Size (Normal): 0 00:11:36.389 Atomic Boundary Size (PFail): 0 00:11:36.389 Atomic Boundary Offset: 0 00:11:36.389 Maximum Single Source Range Length: 65535 00:11:36.389 Maximum Copy Length: 65535 00:11:36.389 Maximum Source Range Count: 1 00:11:36.389 NGUID/EUI64 Never Reused: No 00:11:36.389 Namespace Write Protected: No 00:11:36.389 Number of LBA Formats: 1 00:11:36.389 Current LBA Format: LBA Format #00 00:11:36.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.389 00:11:36.389 12:46:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:36.389 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.389 [2024-07-15 12:46:07.273023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:41.661 Initializing NVMe Controllers 00:11:41.661 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:41.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:41.661 Initialization complete. Launching workers. 00:11:41.661 ======================================================== 00:11:41.661 Latency(us) 00:11:41.661 Device Information : IOPS MiB/s Average min max 00:11:41.661 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39915.35 155.92 3206.38 963.68 7608.28 00:11:41.661 ======================================================== 00:11:41.661 Total : 39915.35 155.92 3206.38 963.68 7608.28 00:11:41.661 00:11:41.661 [2024-07-15 12:46:12.291265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:41.661 12:46:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:41.661 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.661 [2024-07-15 12:46:12.507278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:46.931 Initializing NVMe Controllers 00:11:46.931 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:46.931 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:46.931 Initialization complete. Launching workers. 
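For reference, the two spdk_nvme_perf passes here share one invocation and differ only in the workload flag: -w read produced the table above, and -w write produces the one below. A sketch with the build/bin path shortened:

  # QD 128, 4 KiB IOs, 5 seconds, core mask 0x2, 256 MB memory pool (-s 256)
  spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'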
00:11:46.931 ======================================================== 00:11:46.931 Latency(us) 00:11:46.931 Device Information : IOPS MiB/s Average min max 00:11:46.931 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.28 62.65 7979.71 7577.96 8002.55 00:11:46.931 ======================================================== 00:11:46.931 Total : 16039.28 62.65 7979.71 7577.96 8002.55 00:11:46.931 00:11:46.931 [2024-07-15 12:46:17.542476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:46.931 12:46:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:46.931 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.931 [2024-07-15 12:46:17.732431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:52.205 [2024-07-15 12:46:22.800534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:52.205 Initializing NVMe Controllers 00:11:52.205 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:52.205 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:52.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:52.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:52.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:52.205 Initialization complete. Launching workers. 00:11:52.205 Starting thread on core 2 00:11:52.205 Starting thread on core 3 00:11:52.205 Starting thread on core 1 00:11:52.205 12:46:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:52.205 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.205 [2024-07-15 12:46:23.079758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.395 [2024-07-15 12:46:26.711830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.395 Initializing NVMe Controllers 00:11:56.395 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.395 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.395 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:56.395 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:56.395 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:56.395 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:56.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:56.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:56.395 Initialization complete. Launching workers. 
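The per-core IO/s figures below come from the arbitration example launched above; its "run with configuration" line shows the defaults the short invocation expanded to (-q 64 -s 131072 -w randrw -M 50 -c 0xf). A sketch of that invocation, build/examples path shortened:

  # Three-second arbitration run against the first vfio-user controller
  arbitration -t 3 -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'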
00:11:56.395 Starting thread on core 1 with urgent priority queue 00:11:56.395 Starting thread on core 2 with urgent priority queue 00:11:56.395 Starting thread on core 3 with urgent priority queue 00:11:56.395 Starting thread on core 0 with urgent priority queue 00:11:56.395 SPDK bdev Controller (SPDK1 ) core 0: 5067.67 IO/s 19.73 secs/100000 ios 00:11:56.395 SPDK bdev Controller (SPDK1 ) core 1: 5148.33 IO/s 19.42 secs/100000 ios 00:11:56.395 SPDK bdev Controller (SPDK1 ) core 2: 4723.67 IO/s 21.17 secs/100000 ios 00:11:56.395 SPDK bdev Controller (SPDK1 ) core 3: 5012.67 IO/s 19.95 secs/100000 ios 00:11:56.395 ======================================================== 00:11:56.395 00:11:56.395 12:46:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:56.395 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.395 [2024-07-15 12:46:26.983652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.395 Initializing NVMe Controllers 00:11:56.395 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.395 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:56.395 Namespace ID: 1 size: 0GB 00:11:56.395 Initialization complete. 00:11:56.395 INFO: using host memory buffer for IO 00:11:56.395 Hello world! 00:11:56.395 [2024-07-15 12:46:27.017861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.395 12:46:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:56.395 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.395 [2024-07-15 12:46:27.284301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.773 Initializing NVMe Controllers 00:11:57.773 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.773 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.773 Initialization complete. Launching workers. 
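The submit/complete histograms that follow come from the overhead tool, which times each IO's submission and completion paths in nanoseconds; judging from the run above, -H appears to request the histogram output (an inference, since the log itself does not document the flag). A sketch, test/nvme/overhead path shortened:

  # 4 KiB IOs for 1 second (-o 4096 -t 1); -H adds the histograms shown below
  overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'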
00:11:57.773 submit (in ns) avg, min, max = 5928.0, 3276.5, 4000028.7 00:11:57.773 complete (in ns) avg, min, max = 21124.7, 1812.2, 3998756.5 00:11:57.773 00:11:57.773 Submit histogram 00:11:57.773 ================ 00:11:57.773 Range in us Cumulative Count 00:11:57.773 3.270 - 3.283: 0.0061% ( 1) 00:11:57.773 3.283 - 3.297: 0.1222% ( 19) 00:11:57.773 3.297 - 3.311: 1.1185% ( 163) 00:11:57.773 3.311 - 3.325: 2.9888% ( 306) 00:11:57.773 3.325 - 3.339: 5.9776% ( 489) 00:11:57.773 3.339 - 3.353: 9.9505% ( 650) 00:11:57.773 3.353 - 3.367: 15.3169% ( 878) 00:11:57.773 3.367 - 3.381: 20.8728% ( 909) 00:11:57.773 3.381 - 3.395: 26.7099% ( 955) 00:11:57.773 3.395 - 3.409: 32.5163% ( 950) 00:11:57.773 3.409 - 3.423: 37.8522% ( 873) 00:11:57.773 3.423 - 3.437: 42.3568% ( 737) 00:11:57.773 3.437 - 3.450: 47.1670% ( 787) 00:11:57.773 3.450 - 3.464: 53.4136% ( 1022) 00:11:57.773 3.464 - 3.478: 58.7372% ( 871) 00:11:57.773 3.478 - 3.492: 63.2296% ( 735) 00:11:57.773 3.492 - 3.506: 68.6755% ( 891) 00:11:57.773 3.506 - 3.520: 73.9319% ( 860) 00:11:57.773 3.520 - 3.534: 77.6358% ( 606) 00:11:57.773 3.534 - 3.548: 80.9669% ( 545) 00:11:57.773 3.548 - 3.562: 83.5035% ( 415) 00:11:57.773 3.562 - 3.590: 86.2600% ( 451) 00:11:57.773 3.590 - 3.617: 87.5558% ( 212) 00:11:57.773 3.617 - 3.645: 89.0166% ( 239) 00:11:57.773 3.645 - 3.673: 90.7402% ( 282) 00:11:57.773 3.673 - 3.701: 92.5310% ( 293) 00:11:57.773 3.701 - 3.729: 94.2607% ( 283) 00:11:57.773 3.729 - 3.757: 95.9599% ( 278) 00:11:57.773 3.757 - 3.784: 97.3046% ( 220) 00:11:57.773 3.784 - 3.812: 98.2458% ( 154) 00:11:57.773 3.812 - 3.840: 98.8632% ( 101) 00:11:57.773 3.840 - 3.868: 99.2788% ( 68) 00:11:57.773 3.868 - 3.896: 99.4560% ( 29) 00:11:57.773 3.896 - 3.923: 99.5538% ( 16) 00:11:57.773 3.923 - 3.951: 99.5844% ( 5) 00:11:57.773 3.951 - 3.979: 99.6027% ( 3) 00:11:57.773 3.979 - 4.007: 99.6088% ( 1) 00:11:57.773 4.146 - 4.174: 99.6149% ( 1) 00:11:57.773 5.120 - 5.148: 99.6272% ( 2) 00:11:57.773 5.148 - 5.176: 99.6333% ( 1) 00:11:57.773 5.259 - 5.287: 99.6394% ( 1) 00:11:57.773 5.287 - 5.315: 99.6455% ( 1) 00:11:57.773 5.343 - 5.370: 99.6577% ( 2) 00:11:57.773 5.370 - 5.398: 99.6761% ( 3) 00:11:57.773 5.398 - 5.426: 99.6944% ( 3) 00:11:57.773 5.426 - 5.454: 99.7066% ( 2) 00:11:57.773 5.454 - 5.482: 99.7188% ( 2) 00:11:57.774 5.482 - 5.510: 99.7250% ( 1) 00:11:57.774 5.510 - 5.537: 99.7372% ( 2) 00:11:57.774 5.537 - 5.565: 99.7433% ( 1) 00:11:57.774 5.565 - 5.593: 99.7494% ( 1) 00:11:57.774 5.621 - 5.649: 99.7616% ( 2) 00:11:57.774 5.649 - 5.677: 99.7739% ( 2) 00:11:57.774 5.677 - 5.704: 99.7861% ( 2) 00:11:57.774 5.899 - 5.927: 99.7983% ( 2) 00:11:57.774 5.927 - 5.955: 99.8044% ( 1) 00:11:57.774 5.955 - 5.983: 99.8166% ( 2) 00:11:57.774 5.983 - 6.010: 99.8227% ( 1) 00:11:57.774 6.094 - 6.122: 99.8289% ( 1) 00:11:57.774 6.122 - 6.150: 99.8350% ( 1) 00:11:57.774 6.205 - 6.233: 99.8472% ( 2) 00:11:57.774 6.372 - 6.400: 99.8533% ( 1) 00:11:57.774 6.734 - 6.762: 99.8594% ( 1) 00:11:57.774 6.790 - 6.817: 99.8655% ( 1) 00:11:57.774 6.957 - 6.984: 99.8716% ( 1) 00:11:57.774 7.179 - 7.235: 99.8839% ( 2) 00:11:57.774 7.235 - 7.290: 99.8900% ( 1) 00:11:57.774 7.346 - 7.402: 99.8961% ( 1) 00:11:57.774 7.736 - 7.791: 99.9022% ( 1) 00:11:57.774 7.958 - 8.014: 99.9083% ( 1) 00:11:57.774 8.403 - 8.459: 99.9144% ( 1) 00:11:57.774 8.849 - 8.904: 99.9205% ( 1) 00:11:57.774 9.183 - 9.238: 99.9267% ( 1) 00:11:57.774 9.628 - 9.683: 99.9328% ( 1) 00:11:57.774 40.737 - 40.960: 99.9389% ( 1) 00:11:57.774 3989.148 - 4017.642: 100.0000% ( 10) 00:11:57.774 00:11:57.774 Complete 
histogram 00:11:57.774 ================== 00:11:57.774 Range in us Cumulative Count 00:11:57.774 1.809 - 1.823: 0.0550% ( 9) 00:11:57.774 [2024-07-15 12:46:28.307166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.774 1.823 - 1.837: 0.3728% ( 52) 00:11:57.774 1.837 - 1.850: 1.2530% ( 144) 00:11:57.774 1.850 - 1.864: 2.7993% ( 253) 00:11:57.774 1.864 - 1.878: 27.7367% ( 4080) 00:11:57.774 1.878 - 1.892: 79.1150% ( 8406) 00:11:57.774 1.892 - 1.906: 92.9405% ( 2262) 00:11:57.774 1.906 - 1.920: 95.7215% ( 455) 00:11:57.774 1.920 - 1.934: 96.2472% ( 86) 00:11:57.774 1.934 - 1.948: 96.9134% ( 109) 00:11:57.774 1.948 - 1.962: 98.2153% ( 213) 00:11:57.774 1.962 - 1.976: 98.9976% ( 128) 00:11:57.774 1.976 - 1.990: 99.2299% ( 38) 00:11:57.774 1.990 - 2.003: 99.2604% ( 5) 00:11:57.774 2.003 - 2.017: 99.2788% ( 3) 00:11:57.774 2.017 - 2.031: 99.2910% ( 2) 00:11:57.774 2.031 - 2.045: 99.2971% ( 1) 00:11:57.774 2.045 - 2.059: 99.3154% ( 3) 00:11:57.774 2.073 - 2.087: 99.3216% ( 1) 00:11:57.774 2.115 - 2.129: 99.3277% ( 1) 00:11:57.774 2.254 - 2.268: 99.3338% ( 1) 00:11:57.774 3.673 - 3.701: 99.3399% ( 1) 00:11:57.774 3.729 - 3.757: 99.3460% ( 1) 00:11:57.774 3.757 - 3.784: 99.3521% ( 1) 00:11:57.774 3.784 - 3.812: 99.3582% ( 1) 00:11:57.774 3.840 - 3.868: 99.3643% ( 1) 00:11:57.774 3.868 - 3.896: 99.3705% ( 1) 00:11:57.774 3.923 - 3.951: 99.3766% ( 1) 00:11:57.774 3.951 - 3.979: 99.3888% ( 2) 00:11:57.774 4.035 - 4.063: 99.3949% ( 1) 00:11:57.774 4.063 - 4.090: 99.4010% ( 1) 00:11:57.774 4.090 - 4.118: 99.4071% ( 1) 00:11:57.774 4.536 - 4.563: 99.4132% ( 1) 00:11:57.774 4.563 - 4.591: 99.4194% ( 1) 00:11:57.774 4.591 - 4.619: 99.4255% ( 1) 00:11:57.774 4.703 - 4.730: 99.4316% ( 1) 00:11:57.774 4.814 - 4.842: 99.4377% ( 1) 00:11:57.774 5.064 - 5.092: 99.4438% ( 1) 00:11:57.774 5.092 - 5.120: 99.4560% ( 2) 00:11:57.774 5.203 - 5.231: 99.4621% ( 1) 00:11:57.774 6.038 - 6.066: 99.4682% ( 1) 00:11:57.774 6.066 - 6.094: 99.4744% ( 1) 00:11:57.774 6.372 - 6.400: 99.4805% ( 1) 00:11:57.774 6.483 - 6.511: 99.4866% ( 1) 00:11:57.774 6.567 - 6.595: 99.4927% ( 1) 00:11:57.774 7.179 - 7.235: 99.4988% ( 1) 00:11:57.774 8.125 - 8.181: 99.5049% ( 1) 00:11:57.774 12.744 - 12.800: 99.5110% ( 1) 00:11:57.774 17.586 - 17.697: 99.5171% ( 1) 00:11:57.774 3134.330 - 3148.577: 99.5233% ( 1) 00:11:57.774 3989.148 - 4017.642: 100.0000% ( 78) 00:11:57.774 00 00:11:57.774 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:57.774 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:57.774 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:57.774 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:57.774 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:57.774 [ 00:11:57.774 { 00:11:57.774 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.774 "subtype": "Discovery", 00:11:57.774 "listen_addresses": [], 00:11:57.774 "allow_any_host": true, 00:11:57.774 "hosts": [] 00:11:57.774 }, 00:11:57.774 { 00:11:57.774 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:57.774 "subtype": "NVMe", 00:11:57.774 "listen_addresses": [ 00:11:57.774 { 00:11:57.774 "trtype": "VFIOUSER", 00:11:57.774
"adrfam": "IPv4", 00:11:57.774 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:57.774 "trsvcid": "0" 00:11:57.774 } 00:11:57.774 ], 00:11:57.774 "allow_any_host": true, 00:11:57.774 "hosts": [], 00:11:57.774 "serial_number": "SPDK1", 00:11:57.774 "model_number": "SPDK bdev Controller", 00:11:57.774 "max_namespaces": 32, 00:11:57.774 "min_cntlid": 1, 00:11:57.774 "max_cntlid": 65519, 00:11:57.774 "namespaces": [ 00:11:57.774 { 00:11:57.774 "nsid": 1, 00:11:57.774 "bdev_name": "Malloc1", 00:11:57.774 "name": "Malloc1", 00:11:57.774 "nguid": "FDC16E27FE2F41A99DAEC19CCE63EF93", 00:11:57.774 "uuid": "fdc16e27-fe2f-41a9-9dae-c19cce63ef93" 00:11:57.774 } 00:11:57.774 ] 00:11:57.774 }, 00:11:57.774 { 00:11:57.774 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:57.775 "subtype": "NVMe", 00:11:57.775 "listen_addresses": [ 00:11:57.775 { 00:11:57.775 "trtype": "VFIOUSER", 00:11:57.775 "adrfam": "IPv4", 00:11:57.775 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:57.775 "trsvcid": "0" 00:11:57.775 } 00:11:57.775 ], 00:11:57.775 "allow_any_host": true, 00:11:57.775 "hosts": [], 00:11:57.775 "serial_number": "SPDK2", 00:11:57.775 "model_number": "SPDK bdev Controller", 00:11:57.775 "max_namespaces": 32, 00:11:57.775 "min_cntlid": 1, 00:11:57.775 "max_cntlid": 65519, 00:11:57.775 "namespaces": [ 00:11:57.775 { 00:11:57.775 "nsid": 1, 00:11:57.775 "bdev_name": "Malloc2", 00:11:57.775 "name": "Malloc2", 00:11:57.775 "nguid": "E8C56628714148539B102F8E2B81D541", 00:11:57.775 "uuid": "e8c56628-7141-4853-9b10-2f8e2b81d541" 00:11:57.775 } 00:11:57.775 ] 00:11:57.775 } 00:11:57.775 ] 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1640798 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:57.775 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:57.775 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.775 [2024-07-15 12:46:28.682676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:58.086 Malloc3 00:11:58.086 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:58.086 [2024-07-15 12:46:28.901334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:58.086 12:46:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.086 Asynchronous Event Request test 00:11:58.086 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.086 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.086 Registering asynchronous event callbacks... 00:11:58.086 Starting namespace attribute notice tests for all controllers... 00:11:58.086 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:58.086 aer_cb - Changed Namespace 00:11:58.086 Cleaning up... 00:11:58.349 [ 00:11:58.349 { 00:11:58.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.349 "subtype": "Discovery", 00:11:58.349 "listen_addresses": [], 00:11:58.349 "allow_any_host": true, 00:11:58.349 "hosts": [] 00:11:58.349 }, 00:11:58.349 { 00:11:58.349 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.349 "subtype": "NVMe", 00:11:58.349 "listen_addresses": [ 00:11:58.349 { 00:11:58.349 "trtype": "VFIOUSER", 00:11:58.349 "adrfam": "IPv4", 00:11:58.349 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.350 "trsvcid": "0" 00:11:58.350 } 00:11:58.350 ], 00:11:58.350 "allow_any_host": true, 00:11:58.350 "hosts": [], 00:11:58.350 "serial_number": "SPDK1", 00:11:58.350 "model_number": "SPDK bdev Controller", 00:11:58.350 "max_namespaces": 32, 00:11:58.350 "min_cntlid": 1, 00:11:58.350 "max_cntlid": 65519, 00:11:58.350 "namespaces": [ 00:11:58.350 { 00:11:58.350 "nsid": 1, 00:11:58.350 "bdev_name": "Malloc1", 00:11:58.350 "name": "Malloc1", 00:11:58.350 "nguid": "FDC16E27FE2F41A99DAEC19CCE63EF93", 00:11:58.350 "uuid": "fdc16e27-fe2f-41a9-9dae-c19cce63ef93" 00:11:58.350 }, 00:11:58.350 { 00:11:58.350 "nsid": 2, 00:11:58.350 "bdev_name": "Malloc3", 00:11:58.350 "name": "Malloc3", 00:11:58.350 "nguid": "938D83FD210446199E587038651A9416", 00:11:58.350 "uuid": "938d83fd-2104-4619-9e58-7038651a9416" 00:11:58.350 } 00:11:58.350 ] 00:11:58.350 }, 00:11:58.350 { 00:11:58.350 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.350 "subtype": "NVMe", 00:11:58.350 "listen_addresses": [ 00:11:58.350 { 00:11:58.350 "trtype": "VFIOUSER", 00:11:58.350 "adrfam": "IPv4", 00:11:58.350 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.350 "trsvcid": "0" 00:11:58.350 } 00:11:58.350 ], 00:11:58.350 "allow_any_host": true, 00:11:58.350 "hosts": [], 00:11:58.350 "serial_number": "SPDK2", 00:11:58.350 "model_number": "SPDK bdev Controller", 00:11:58.350 
"max_namespaces": 32, 00:11:58.350 "min_cntlid": 1, 00:11:58.350 "max_cntlid": 65519, 00:11:58.350 "namespaces": [ 00:11:58.350 { 00:11:58.350 "nsid": 1, 00:11:58.350 "bdev_name": "Malloc2", 00:11:58.350 "name": "Malloc2", 00:11:58.350 "nguid": "E8C56628714148539B102F8E2B81D541", 00:11:58.350 "uuid": "e8c56628-7141-4853-9b10-2f8e2b81d541" 00:11:58.350 } 00:11:58.350 ] 00:11:58.350 } 00:11:58.350 ] 00:11:58.350 12:46:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1640798 00:11:58.350 12:46:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.350 12:46:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:58.350 12:46:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:58.350 12:46:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:58.350 [2024-07-15 12:46:29.132496] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:58.350 [2024-07-15 12:46:29.132534] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640952 ] 00:11:58.350 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.350 [2024-07-15 12:46:29.162623] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:58.350 [2024-07-15 12:46:29.165084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:58.350 [2024-07-15 12:46:29.165105] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9b26e02000 00:11:58.350 [2024-07-15 12:46:29.166086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.167088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.168090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.169098] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.170107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.171114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.172122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.173131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.350 [2024-07-15 12:46:29.174141] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:58.350 [2024-07-15 12:46:29.174150] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9b26df7000 00:11:58.350 [2024-07-15 12:46:29.175090] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:58.350 [2024-07-15 12:46:29.183607] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:58.350 [2024-07-15 12:46:29.183631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:58.350 [2024-07-15 12:46:29.188707] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:58.350 [2024-07-15 12:46:29.188742] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:58.350 [2024-07-15 12:46:29.188809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:58.350 [2024-07-15 12:46:29.188827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:58.350 [2024-07-15 12:46:29.188832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:58.350 [2024-07-15 12:46:29.189713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:58.350 [2024-07-15 12:46:29.189721] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:58.350 [2024-07-15 12:46:29.189727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:58.350 [2024-07-15 12:46:29.190717] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:58.350 [2024-07-15 12:46:29.190727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:58.350 [2024-07-15 12:46:29.190733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:58.350 [2024-07-15 12:46:29.191720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:58.350 [2024-07-15 12:46:29.191729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:58.350 [2024-07-15 12:46:29.192728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:58.350 [2024-07-15 12:46:29.192736] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:58.350 [2024-07-15 12:46:29.192741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:58.350 [2024-07-15 12:46:29.192746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:58.350 [2024-07-15 12:46:29.192851] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:58.350 [2024-07-15 12:46:29.192855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:58.350 [2024-07-15 12:46:29.192860] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:58.350 [2024-07-15 12:46:29.193742] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:58.350 [2024-07-15 12:46:29.194748] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:58.350 [2024-07-15 12:46:29.195762] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:58.350 [2024-07-15 12:46:29.196764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.350 [2024-07-15 12:46:29.196800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:58.350 [2024-07-15 12:46:29.197778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:58.350 [2024-07-15 12:46:29.197786] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:58.350 [2024-07-15 12:46:29.197790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.197809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:58.350 [2024-07-15 12:46:29.197816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.197826] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:58.350 [2024-07-15 12:46:29.197831] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.350 [2024-07-15 12:46:29.197842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.350 [2024-07-15 12:46:29.205231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:58.350 [2024-07-15 12:46:29.205241] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:58.350 [2024-07-15 12:46:29.205248] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:58.350 [2024-07-15 12:46:29.205252] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:58.350 [2024-07-15 12:46:29.205256] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:58.350 [2024-07-15 12:46:29.205260] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:58.350 [2024-07-15 12:46:29.205264] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:58.350 [2024-07-15 12:46:29.205268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.205275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.205285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:58.350 [2024-07-15 12:46:29.213231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:58.350 [2024-07-15 12:46:29.213245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.350 [2024-07-15 12:46:29.213253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.350 [2024-07-15 12:46:29.213260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.350 [2024-07-15 12:46:29.213268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.350 [2024-07-15 12:46:29.213272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.213279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.213287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:58.350 [2024-07-15 12:46:29.220231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:58.350 [2024-07-15 12:46:29.220239] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:58.350 [2024-07-15 12:46:29.220246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.220252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.220257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:58.350 [2024-07-15 12:46:29.220265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:58.350 [2024-07-15 12:46:29.229229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.229281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.229289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.229296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:58.351 [2024-07-15 12:46:29.229300] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:58.351 [2024-07-15 12:46:29.229305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.237229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.237239] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:58.351 [2024-07-15 12:46:29.237250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.237257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.237263] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:58.351 [2024-07-15 12:46:29.237267] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.351 [2024-07-15 12:46:29.237273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.245230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.245243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.245250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.245257] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:58.351 [2024-07-15 12:46:29.245261] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.351 [2024-07-15 12:46:29.245267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.253229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.253238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253272] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:58.351 [2024-07-15 12:46:29.253276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:58.351 [2024-07-15 12:46:29.253281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:58.351 [2024-07-15 12:46:29.253296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.261230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.261242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.269230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.269241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.277229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.277240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.285230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.285244] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:58.351 [2024-07-15 12:46:29.285248] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:58.351 [2024-07-15 12:46:29.285251] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:11:58.351 [2024-07-15 12:46:29.285254] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:58.351 [2024-07-15 12:46:29.285260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:58.351 [2024-07-15 12:46:29.285266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:58.351 [2024-07-15 12:46:29.285270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:58.351 [2024-07-15 12:46:29.285275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.285281] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:58.351 [2024-07-15 12:46:29.285285] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.351 [2024-07-15 12:46:29.285290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.285299] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:58.351 [2024-07-15 12:46:29.285303] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:58.351 [2024-07-15 12:46:29.285308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:58.351 [2024-07-15 12:46:29.293229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.293242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.293251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:58.351 [2024-07-15 12:46:29.293257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:58.351 ===================================================== 00:11:58.351 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:58.351 ===================================================== 00:11:58.351 Controller Capabilities/Features 00:11:58.351 ================================ 00:11:58.351 Vendor ID: 4e58 00:11:58.351 Subsystem Vendor ID: 4e58 00:11:58.351 Serial Number: SPDK2 00:11:58.351 Model Number: SPDK bdev Controller 00:11:58.351 Firmware Version: 24.09 00:11:58.351 Recommended Arb Burst: 6 00:11:58.351 IEEE OUI Identifier: 8d 6b 50 00:11:58.351 Multi-path I/O 00:11:58.351 May have multiple subsystem ports: Yes 00:11:58.351 May have multiple controllers: Yes 00:11:58.351 Associated with SR-IOV VF: No 00:11:58.351 Max Data Transfer Size: 131072 00:11:58.351 Max Number of Namespaces: 32 00:11:58.351 Max Number of I/O Queues: 127 00:11:58.351 NVMe Specification Version (VS): 1.3 00:11:58.351 NVMe Specification Version (Identify): 1.3 00:11:58.351 Maximum Queue Entries: 256 00:11:58.351 Contiguous Queues Required: Yes 00:11:58.351 Arbitration Mechanisms 
Supported 00:11:58.351 Weighted Round Robin: Not Supported 00:11:58.351 Vendor Specific: Not Supported 00:11:58.351 Reset Timeout: 15000 ms 00:11:58.351 Doorbell Stride: 4 bytes 00:11:58.351 NVM Subsystem Reset: Not Supported 00:11:58.351 Command Sets Supported 00:11:58.351 NVM Command Set: Supported 00:11:58.351 Boot Partition: Not Supported 00:11:58.351 Memory Page Size Minimum: 4096 bytes 00:11:58.351 Memory Page Size Maximum: 4096 bytes 00:11:58.351 Persistent Memory Region: Not Supported 00:11:58.351 Optional Asynchronous Events Supported 00:11:58.351 Namespace Attribute Notices: Supported 00:11:58.351 Firmware Activation Notices: Not Supported 00:11:58.351 ANA Change Notices: Not Supported 00:11:58.351 PLE Aggregate Log Change Notices: Not Supported 00:11:58.351 LBA Status Info Alert Notices: Not Supported 00:11:58.351 EGE Aggregate Log Change Notices: Not Supported 00:11:58.351 Normal NVM Subsystem Shutdown event: Not Supported 00:11:58.351 Zone Descriptor Change Notices: Not Supported 00:11:58.351 Discovery Log Change Notices: Not Supported 00:11:58.351 Controller Attributes 00:11:58.351 128-bit Host Identifier: Supported 00:11:58.351 Non-Operational Permissive Mode: Not Supported 00:11:58.351 NVM Sets: Not Supported 00:11:58.351 Read Recovery Levels: Not Supported 00:11:58.351 Endurance Groups: Not Supported 00:11:58.351 Predictable Latency Mode: Not Supported 00:11:58.351 Traffic Based Keep ALive: Not Supported 00:11:58.351 Namespace Granularity: Not Supported 00:11:58.351 SQ Associations: Not Supported 00:11:58.351 UUID List: Not Supported 00:11:58.351 Multi-Domain Subsystem: Not Supported 00:11:58.351 Fixed Capacity Management: Not Supported 00:11:58.351 Variable Capacity Management: Not Supported 00:11:58.351 Delete Endurance Group: Not Supported 00:11:58.351 Delete NVM Set: Not Supported 00:11:58.351 Extended LBA Formats Supported: Not Supported 00:11:58.351 Flexible Data Placement Supported: Not Supported 00:11:58.351 00:11:58.351 Controller Memory Buffer Support 00:11:58.351 ================================ 00:11:58.351 Supported: No 00:11:58.351 00:11:58.351 Persistent Memory Region Support 00:11:58.351 ================================ 00:11:58.351 Supported: No 00:11:58.351 00:11:58.351 Admin Command Set Attributes 00:11:58.351 ============================ 00:11:58.351 Security Send/Receive: Not Supported 00:11:58.351 Format NVM: Not Supported 00:11:58.351 Firmware Activate/Download: Not Supported 00:11:58.351 Namespace Management: Not Supported 00:11:58.351 Device Self-Test: Not Supported 00:11:58.351 Directives: Not Supported 00:11:58.351 NVMe-MI: Not Supported 00:11:58.351 Virtualization Management: Not Supported 00:11:58.351 Doorbell Buffer Config: Not Supported 00:11:58.351 Get LBA Status Capability: Not Supported 00:11:58.351 Command & Feature Lockdown Capability: Not Supported 00:11:58.351 Abort Command Limit: 4 00:11:58.351 Async Event Request Limit: 4 00:11:58.351 Number of Firmware Slots: N/A 00:11:58.351 Firmware Slot 1 Read-Only: N/A 00:11:58.351 Firmware Activation Without Reset: N/A 00:11:58.351 Multiple Update Detection Support: N/A 00:11:58.351 Firmware Update Granularity: No Information Provided 00:11:58.351 Per-Namespace SMART Log: No 00:11:58.351 Asymmetric Namespace Access Log Page: Not Supported 00:11:58.351 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:58.351 Command Effects Log Page: Supported 00:11:58.351 Get Log Page Extended Data: Supported 00:11:58.351 Telemetry Log Pages: Not Supported 00:11:58.351 Persistent Event Log Pages: Not Supported 
00:11:58.351 Supported Log Pages Log Page: May Support 00:11:58.351 Commands Supported & Effects Log Page: Not Supported 00:11:58.351 Feature Identifiers & Effects Log Page:May Support 00:11:58.351 NVMe-MI Commands & Effects Log Page: May Support 00:11:58.351 Data Area 4 for Telemetry Log: Not Supported 00:11:58.351 Error Log Page Entries Supported: 128 00:11:58.351 Keep Alive: Supported 00:11:58.351 Keep Alive Granularity: 10000 ms 00:11:58.351 00:11:58.351 NVM Command Set Attributes 00:11:58.351 ========================== 00:11:58.351 Submission Queue Entry Size 00:11:58.351 Max: 64 00:11:58.351 Min: 64 00:11:58.351 Completion Queue Entry Size 00:11:58.351 Max: 16 00:11:58.352 Min: 16 00:11:58.352 Number of Namespaces: 32 00:11:58.352 Compare Command: Supported 00:11:58.352 Write Uncorrectable Command: Not Supported 00:11:58.352 Dataset Management Command: Supported 00:11:58.352 Write Zeroes Command: Supported 00:11:58.352 Set Features Save Field: Not Supported 00:11:58.352 Reservations: Not Supported 00:11:58.352 Timestamp: Not Supported 00:11:58.352 Copy: Supported 00:11:58.352 Volatile Write Cache: Present 00:11:58.352 Atomic Write Unit (Normal): 1 00:11:58.352 Atomic Write Unit (PFail): 1 00:11:58.352 Atomic Compare & Write Unit: 1 00:11:58.352 Fused Compare & Write: Supported 00:11:58.352 Scatter-Gather List 00:11:58.352 SGL Command Set: Supported (Dword aligned) 00:11:58.352 SGL Keyed: Not Supported 00:11:58.352 SGL Bit Bucket Descriptor: Not Supported 00:11:58.352 SGL Metadata Pointer: Not Supported 00:11:58.352 Oversized SGL: Not Supported 00:11:58.352 SGL Metadata Address: Not Supported 00:11:58.352 SGL Offset: Not Supported 00:11:58.352 Transport SGL Data Block: Not Supported 00:11:58.352 Replay Protected Memory Block: Not Supported 00:11:58.352 00:11:58.352 Firmware Slot Information 00:11:58.352 ========================= 00:11:58.352 Active slot: 1 00:11:58.352 Slot 1 Firmware Revision: 24.09 00:11:58.352 00:11:58.352 00:11:58.352 Commands Supported and Effects 00:11:58.352 ============================== 00:11:58.352 Admin Commands 00:11:58.352 -------------- 00:11:58.352 Get Log Page (02h): Supported 00:11:58.352 Identify (06h): Supported 00:11:58.352 Abort (08h): Supported 00:11:58.352 Set Features (09h): Supported 00:11:58.352 Get Features (0Ah): Supported 00:11:58.352 Asynchronous Event Request (0Ch): Supported 00:11:58.352 Keep Alive (18h): Supported 00:11:58.352 I/O Commands 00:11:58.352 ------------ 00:11:58.352 Flush (00h): Supported LBA-Change 00:11:58.352 Write (01h): Supported LBA-Change 00:11:58.352 Read (02h): Supported 00:11:58.352 Compare (05h): Supported 00:11:58.352 Write Zeroes (08h): Supported LBA-Change 00:11:58.352 Dataset Management (09h): Supported LBA-Change 00:11:58.352 Copy (19h): Supported LBA-Change 00:11:58.352 00:11:58.352 Error Log 00:11:58.352 ========= 00:11:58.352 00:11:58.352 Arbitration 00:11:58.352 =========== 00:11:58.352 Arbitration Burst: 1 00:11:58.352 00:11:58.352 Power Management 00:11:58.352 ================ 00:11:58.352 Number of Power States: 1 00:11:58.352 Current Power State: Power State #0 00:11:58.352 Power State #0: 00:11:58.352 Max Power: 0.00 W 00:11:58.352 Non-Operational State: Operational 00:11:58.352 Entry Latency: Not Reported 00:11:58.352 Exit Latency: Not Reported 00:11:58.352 Relative Read Throughput: 0 00:11:58.352 Relative Read Latency: 0 00:11:58.352 Relative Write Throughput: 0 00:11:58.352 Relative Write Latency: 0 00:11:58.352 Idle Power: Not Reported 00:11:58.352 Active Power: Not Reported 00:11:58.352 
Non-Operational Permissive Mode: Not Supported 00:11:58.352 00:11:58.352 Health Information 00:11:58.352 ================== 00:11:58.352 Critical Warnings: 00:11:58.352 Available Spare Space: OK 00:11:58.352 Temperature: OK 00:11:58.352 Device Reliability: OK 00:11:58.352 Read Only: No 00:11:58.352 Volatile Memory Backup: OK 00:11:58.352 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:58.352 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:58.352 Available Spare: 0% 00:11:58.352 Available Spare Threshold: 0% [2024-07-15 12:46:29.293337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:58.352 [2024-07-15 12:46:29.301230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:58.352 [2024-07-15 12:46:29.301260] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:58.352 [2024-07-15 12:46:29.301269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.352 [2024-07-15 12:46:29.301274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.352 [2024-07-15 12:46:29.301280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.352 [2024-07-15 12:46:29.301285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.352 [2024-07-15 12:46:29.301339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:58.352 [2024-07-15 12:46:29.301349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:58.352 [2024-07-15 12:46:29.302340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.352 [2024-07-15 12:46:29.302382] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:58.352 [2024-07-15 12:46:29.302388] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:58.611 [2024-07-15 12:46:29.303348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:58.611 [2024-07-15 12:46:29.303360] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:58.611 [2024-07-15 12:46:29.303405] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:58.611 [2024-07-15 12:46:29.306230] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:58.611 Life Percentage Used: 0% 00:11:58.611 Data Units Read: 0 00:11:58.611 Data Units Written: 0 00:11:58.611 Host Read Commands: 0 00:11:58.611 Host Write Commands: 0 00:11:58.611 Controller Busy Time: 0 minutes 00:11:58.611 Power Cycles: 0 00:11:58.611 Power On Hours: 0 hours 00:11:58.611 Unsafe Shutdowns: 0 00:11:58.611 Unrecoverable Media
Errors: 0 00:11:58.611 Lifetime Error Log Entries: 0 00:11:58.611 Warning Temperature Time: 0 minutes 00:11:58.611 Critical Temperature Time: 0 minutes 00:11:58.611 00:11:58.611 Number of Queues 00:11:58.611 ================ 00:11:58.611 Number of I/O Submission Queues: 127 00:11:58.611 Number of I/O Completion Queues: 127 00:11:58.611 00:11:58.611 Active Namespaces 00:11:58.611 ================= 00:11:58.611 Namespace ID:1 00:11:58.611 Error Recovery Timeout: Unlimited 00:11:58.611 Command Set Identifier: NVM (00h) 00:11:58.611 Deallocate: Supported 00:11:58.611 Deallocated/Unwritten Error: Not Supported 00:11:58.611 Deallocated Read Value: Unknown 00:11:58.611 Deallocate in Write Zeroes: Not Supported 00:11:58.611 Deallocated Guard Field: 0xFFFF 00:11:58.611 Flush: Supported 00:11:58.611 Reservation: Supported 00:11:58.611 Namespace Sharing Capabilities: Multiple Controllers 00:11:58.611 Size (in LBAs): 131072 (0GiB) 00:11:58.611 Capacity (in LBAs): 131072 (0GiB) 00:11:58.611 Utilization (in LBAs): 131072 (0GiB) 00:11:58.611 NGUID: E8C56628714148539B102F8E2B81D541 00:11:58.611 UUID: e8c56628-7141-4853-9b10-2f8e2b81d541 00:11:58.611 Thin Provisioning: Not Supported 00:11:58.611 Per-NS Atomic Units: Yes 00:11:58.611 Atomic Boundary Size (Normal): 0 00:11:58.611 Atomic Boundary Size (PFail): 0 00:11:58.611 Atomic Boundary Offset: 0 00:11:58.611 Maximum Single Source Range Length: 65535 00:11:58.611 Maximum Copy Length: 65535 00:11:58.611 Maximum Source Range Count: 1 00:11:58.611 NGUID/EUI64 Never Reused: No 00:11:58.611 Namespace Write Protected: No 00:11:58.611 Number of LBA Formats: 1 00:11:58.611 Current LBA Format: LBA Format #00 00:11:58.611 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.611 00:11:58.611 12:46:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:58.611 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.611 [2024-07-15 12:46:29.519560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:03.883 Initializing NVMe Controllers 00:12:03.883 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:03.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:03.883 Initialization complete. Launching workers. 
00:12:03.883 ======================================================== 00:12:03.883 Latency(us) 00:12:03.883 Device Information : IOPS MiB/s Average min max 00:12:03.883 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39861.50 155.71 3210.90 972.08 7250.46 00:12:03.883 ======================================================== 00:12:03.883 Total : 39861.50 155.71 3210.90 972.08 7250.46 00:12:03.883 00:12:03.883 [2024-07-15 12:46:34.624464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:03.883 12:46:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:03.883 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.142 [2024-07-15 12:46:34.840112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:09.418 Initializing NVMe Controllers 00:12:09.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:09.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:09.418 Initialization complete. Launching workers. 00:12:09.418 ======================================================== 00:12:09.418 Latency(us) 00:12:09.418 Device Information : IOPS MiB/s Average min max 00:12:09.418 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39933.07 155.99 3205.18 965.12 7187.50 00:12:09.418 ======================================================== 00:12:09.418 Total : 39933.07 155.99 3205.18 965.12 7187.50 00:12:09.418 00:12:09.418 [2024-07-15 12:46:39.863362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:09.418 12:46:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:09.418 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.418 [2024-07-15 12:46:40.048840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:14.689 [2024-07-15 12:46:45.196325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:14.689 Initializing NVMe Controllers 00:12:14.689 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:14.689 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:14.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:14.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:14.689 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:14.689 Initialization complete. Launching workers. 
00:12:14.689 Starting thread on core 2 00:12:14.689 Starting thread on core 3 00:12:14.689 Starting thread on core 1 00:12:14.689 12:46:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:14.689 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.689 [2024-07-15 12:46:45.481651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.885 [2024-07-15 12:46:49.196535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.885 Initializing NVMe Controllers 00:12:18.885 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.885 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:18.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:18.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:18.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:18.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:18.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:18.885 Initialization complete. Launching workers. 00:12:18.885 Starting thread on core 1 with urgent priority queue 00:12:18.885 Starting thread on core 2 with urgent priority queue 00:12:18.885 Starting thread on core 3 with urgent priority queue 00:12:18.885 Starting thread on core 0 with urgent priority queue 00:12:18.885 SPDK bdev Controller (SPDK2 ) core 0: 4427.00 IO/s 22.59 secs/100000 ios 00:12:18.885 SPDK bdev Controller (SPDK2 ) core 1: 4411.67 IO/s 22.67 secs/100000 ios 00:12:18.885 SPDK bdev Controller (SPDK2 ) core 2: 3645.00 IO/s 27.43 secs/100000 ios 00:12:18.885 SPDK bdev Controller (SPDK2 ) core 3: 3899.00 IO/s 25.65 secs/100000 ios 00:12:18.885 ======================================================== 00:12:18.885 00:12:18.885 12:46:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.885 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.885 [2024-07-15 12:46:49.461658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.885 Initializing NVMe Controllers 00:12:18.885 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.885 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.885 Namespace ID: 1 size: 0GB 00:12:18.885 Initialization complete. 00:12:18.885 INFO: using host memory buffer for IO 00:12:18.885 Hello world! 
00:12:18.885 [2024-07-15 12:46:49.469701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.885 12:46:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.885 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.885 [2024-07-15 12:46:49.747084] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.261 Initializing NVMe Controllers 00:12:20.261 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.261 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.261 Initialization complete. Launching workers. 00:12:20.261 submit (in ns) avg, min, max = 9455.7, 3200.9, 4994617.4 00:12:20.261 complete (in ns) avg, min, max = 19526.8, 1757.4, 5994187.8 00:12:20.261 00:12:20.261 Submit histogram 00:12:20.261 ================ 00:12:20.261 Range in us Cumulative Count 00:12:20.261 3.200 - 3.214: 0.0182% ( 3) 00:12:20.261 3.214 - 3.228: 0.0547% ( 6) 00:12:20.261 3.228 - 3.242: 0.0850% ( 5) 00:12:20.261 3.242 - 3.256: 0.2673% ( 30) 00:12:20.261 3.256 - 3.270: 0.4435% ( 29) 00:12:20.261 3.270 - 3.283: 0.7351% ( 48) 00:12:20.261 3.283 - 3.297: 1.8468% ( 183) 00:12:20.261 3.297 - 3.311: 3.4749% ( 268) 00:12:20.261 3.311 - 3.325: 5.9595% ( 409) 00:12:20.261 3.325 - 3.339: 10.0298% ( 670) 00:12:20.261 3.339 - 3.353: 15.0416% ( 825) 00:12:20.261 3.353 - 3.367: 20.6124% ( 917) 00:12:20.261 3.367 - 3.381: 26.7906% ( 1017) 00:12:20.261 3.381 - 3.395: 32.8656% ( 1000) 00:12:20.261 3.395 - 3.409: 37.8410% ( 819) 00:12:20.261 3.409 - 3.423: 42.4397% ( 757) 00:12:20.261 3.423 - 3.437: 47.4698% ( 828) 00:12:20.261 3.437 - 3.450: 52.4816% ( 825) 00:12:20.261 3.450 - 3.464: 56.4547% ( 654) 00:12:20.261 3.464 - 3.478: 60.5006% ( 666) 00:12:20.261 3.478 - 3.492: 66.8975% ( 1053) 00:12:20.261 3.492 - 3.506: 71.7575% ( 800) 00:12:20.261 3.506 - 3.520: 75.4025% ( 600) 00:12:20.261 3.520 - 3.534: 79.6246% ( 695) 00:12:20.261 3.534 - 3.548: 83.0569% ( 565) 00:12:20.261 3.548 - 3.562: 85.0495% ( 328) 00:12:20.261 3.562 - 3.590: 86.9024% ( 305) 00:12:20.261 3.590 - 3.617: 87.9108% ( 166) 00:12:20.261 3.617 - 3.645: 89.3081% ( 230) 00:12:20.261 3.645 - 3.673: 90.9301% ( 267) 00:12:20.261 3.673 - 3.701: 92.6554% ( 284) 00:12:20.261 3.701 - 3.729: 94.3867% ( 285) 00:12:20.261 3.729 - 3.757: 95.9662% ( 260) 00:12:20.261 3.757 - 3.784: 97.3999% ( 236) 00:12:20.261 3.784 - 3.812: 98.2686% ( 143) 00:12:20.261 3.812 - 3.840: 98.8822% ( 101) 00:12:20.261 3.840 - 3.868: 99.2103% ( 54) 00:12:20.261 3.868 - 3.896: 99.3682% ( 26) 00:12:20.261 3.896 - 3.923: 99.4593% ( 15) 00:12:20.261 3.923 - 3.951: 99.5019% ( 7) 00:12:20.261 3.951 - 3.979: 99.5079% ( 1) 00:12:20.261 4.007 - 4.035: 99.5262% ( 3) 00:12:20.261 4.146 - 4.174: 99.5322% ( 1) 00:12:20.261 5.259 - 5.287: 99.5383% ( 1) 00:12:20.261 5.537 - 5.565: 99.5444% ( 1) 00:12:20.261 5.565 - 5.593: 99.5505% ( 1) 00:12:20.261 5.593 - 5.621: 99.5565% ( 1) 00:12:20.261 5.621 - 5.649: 99.5687% ( 2) 00:12:20.261 5.704 - 5.732: 99.5748% ( 1) 00:12:20.261 6.038 - 6.066: 99.5808% ( 1) 00:12:20.261 6.094 - 6.122: 99.5869% ( 1) 00:12:20.261 6.372 - 6.400: 99.5930% ( 1) 00:12:20.261 6.456 - 6.483: 99.5991% ( 1) 00:12:20.261 6.623 - 6.650: 99.6051% ( 1) 00:12:20.261 6.678 - 6.706: 99.6112% ( 1) 00:12:20.261 6.706 - 6.734: 99.6173% ( 1) 00:12:20.261 
6.734 - 6.762: 99.6234% ( 1) 00:12:20.261 6.790 - 6.817: 99.6294% ( 1) 00:12:20.261 6.817 - 6.845: 99.6537% ( 4) 00:12:20.262 6.873 - 6.901: 99.6659% ( 2) 00:12:20.262 7.012 - 7.040: 99.6780% ( 2) 00:12:20.262 7.096 - 7.123: 99.6841% ( 1) 00:12:20.262 7.123 - 7.179: 99.6963% ( 2) 00:12:20.262 7.346 - 7.402: 99.7023% ( 1) 00:12:20.262 7.402 - 7.457: 99.7084% ( 1) 00:12:20.262 7.569 - 7.624: 99.7206% ( 2) 00:12:20.262 7.624 - 7.680: 99.7266% ( 1) 00:12:20.262 7.791 - 7.847: 99.7327% ( 1) 00:12:20.262 7.958 - 8.014: 99.7388% ( 1) 00:12:20.262 8.070 - 8.125: 99.7449% ( 1) 00:12:20.262 8.125 - 8.181: 99.7631% ( 3) 00:12:20.262 8.181 - 8.237: 99.7692% ( 1) 00:12:20.262 8.237 - 8.292: 99.7813% ( 2) 00:12:20.262 8.292 - 8.348: 99.7874% ( 1) 00:12:20.262 8.459 - 8.515: 99.7935% ( 1) 00:12:20.262 8.682 - 8.737: 99.7995% ( 1) 00:12:20.262 8.793 - 8.849: 99.8056% ( 1) 00:12:20.262 8.904 - 8.960: 99.8178% ( 2) 00:12:20.262 9.294 - 9.350: 99.8238% ( 1) 00:12:20.262 [2024-07-15 12:46:50.841340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.262 10.908 - 10.963: 99.8299% ( 1) 00:12:20.262 10.963 - 11.019: 99.8360% ( 1) 00:12:20.262 15.917 - 16.028: 99.8421% ( 1) 00:12:20.262 19.144 - 19.256: 99.8481% ( 1) 00:12:20.262 2521.711 - 2535.958: 99.8542% ( 1) 00:12:20.262 3006.108 - 3020.355: 99.8603% ( 1) 00:12:20.262 3989.148 - 4017.642: 99.9939% ( 22) 00:12:20.262 4986.435 - 5014.929: 100.0000% ( 1) 00:12:20.262 00:12:20.262 Complete histogram 00:12:20.262 ================== 00:12:20.262 Range in us Cumulative Count 00:12:20.262 1.753 - 1.760: 0.0121% ( 2) 00:12:20.262 1.760 - 1.767: 0.2065% ( 32) 00:12:20.262 1.767 - 1.774: 0.3888% ( 30) 00:12:20.262 1.774 - 1.781: 0.5285% ( 23) 00:12:20.262 1.781 - 1.795: 0.6257% ( 16) 00:12:20.262 1.795 - 1.809: 1.4398% ( 134) 00:12:20.262 1.809 - 1.823: 10.1330% ( 1431) 00:12:20.262 1.823 - 1.837: 24.0812% ( 2296) 00:12:20.262 1.837 - 1.850: 27.9995% ( 645) 00:12:20.262 1.850 - 1.864: 52.8218% ( 4086) 00:12:20.262 1.864 - 1.878: 87.4066% ( 5693) 00:12:20.262 1.878 - 1.892: 93.4330% ( 992) 00:12:20.262 1.892 - 1.906: 95.8812% ( 403) 00:12:20.262 1.906 - 1.920: 96.7317% ( 140) 00:12:20.262 1.920 - 1.934: 97.3574% ( 103) 00:12:20.262 1.934 - 1.948: 98.1593% ( 132) 00:12:20.262 1.948 - 1.962: 98.8518% ( 114) 00:12:20.262 1.962 - 1.976: 98.9673% ( 19) 00:12:20.262 1.976 - 1.990: 99.0159% ( 8) 00:12:20.262 1.990 - 2.003: 99.0402% ( 4) 00:12:20.262 2.003 - 2.017: 99.0462% ( 1) 00:12:20.262 2.017 - 2.031: 99.0584% ( 2) 00:12:20.262 2.031 - 2.045: 99.1009% ( 7) 00:12:20.262 2.045 - 2.059: 99.2346% ( 22) 00:12:20.262 2.059 - 2.073: 99.3075% ( 12) 00:12:20.262 2.073 - 2.087: 99.3196% ( 2) 00:12:20.262 2.101 - 2.115: 99.3257% ( 1) 00:12:20.262 2.129 - 2.143: 99.3318% ( 1) 00:12:20.262 2.240 - 2.254: 99.3378% ( 1) 00:12:20.262 2.254 - 2.268: 99.3439% ( 1) 00:12:20.262 3.868 - 3.896: 99.3500% ( 1) 00:12:20.262 4.174 - 4.202: 99.3561% ( 1) 00:12:20.262 4.397 - 4.424: 99.3621% ( 1) 00:12:20.262 4.563 - 4.591: 99.3682% ( 1) 00:12:20.262 4.591 - 4.619: 99.3804% ( 2) 00:12:20.262 5.009 - 5.037: 99.3864% ( 1) 00:12:20.262 5.231 - 5.259: 99.3925% ( 1) 00:12:20.262 5.259 - 5.287: 99.4047% ( 2) 00:12:20.262 5.287 - 5.315: 99.4107% ( 1) 00:12:20.262 5.426 - 5.454: 99.4168% ( 1) 00:12:20.262 5.621 - 5.649: 99.4229% ( 1) 00:12:20.262 5.704 - 5.732: 99.4290% ( 1) 00:12:20.262 5.732 - 5.760: 99.4350% ( 1) 00:12:20.262 5.788 - 5.816: 99.4411% ( 1) 00:12:20.262 5.843 - 5.871: 99.4472% ( 1) 00:12:20.262 5.955 - 5.983: 99.4593% ( 2) 
00:12:20.262 5.983 - 6.010: 99.4654% ( 1) 00:12:20.262 6.010 - 6.038: 99.4715% ( 1) 00:12:20.262 6.038 - 6.066: 99.4776% ( 1) 00:12:20.262 6.233 - 6.261: 99.4897% ( 2) 00:12:20.262 6.400 - 6.428: 99.4958% ( 1) 00:12:20.262 6.456 - 6.483: 99.5019% ( 1) 00:12:20.262 6.483 - 6.511: 99.5079% ( 1) 00:12:20.262 6.678 - 6.706: 99.5140% ( 1) 00:12:20.262 6.845 - 6.873: 99.5201% ( 1) 00:12:20.262 7.680 - 7.736: 99.5262% ( 1) 00:12:20.262 8.459 - 8.515: 99.5322% ( 1) 00:12:20.262 11.798 - 11.854: 99.5383% ( 1) 00:12:20.262 11.910 - 11.965: 99.5444% ( 1) 00:12:20.262 12.077 - 12.132: 99.5505% ( 1) 00:12:20.262 12.355 - 12.410: 99.5565% ( 1) 00:12:20.262 17.363 - 17.475: 99.5626% ( 1) 00:12:20.262 2008.821 - 2023.068: 99.5687% ( 1) 00:12:20.262 3989.148 - 4017.642: 99.9818% ( 68) 00:12:20.262 4986.435 - 5014.929: 99.9879% ( 1) 00:12:20.262 5983.722 - 6012.216: 100.0000% ( 2) 00:12:20.262 00:12:20.262 12:46:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:20.262 12:46:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:20.262 12:46:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:20.262 12:46:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:20.262 12:46:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.262 [ 00:12:20.262 { 00:12:20.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.262 "subtype": "Discovery", 00:12:20.262 "listen_addresses": [], 00:12:20.262 "allow_any_host": true, 00:12:20.262 "hosts": [] 00:12:20.262 }, 00:12:20.262 { 00:12:20.262 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.262 "subtype": "NVMe", 00:12:20.262 "listen_addresses": [ 00:12:20.262 { 00:12:20.262 "trtype": "VFIOUSER", 00:12:20.262 "adrfam": "IPv4", 00:12:20.262 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.262 "trsvcid": "0" 00:12:20.262 } 00:12:20.262 ], 00:12:20.262 "allow_any_host": true, 00:12:20.262 "hosts": [], 00:12:20.262 "serial_number": "SPDK1", 00:12:20.262 "model_number": "SPDK bdev Controller", 00:12:20.262 "max_namespaces": 32, 00:12:20.262 "min_cntlid": 1, 00:12:20.262 "max_cntlid": 65519, 00:12:20.262 "namespaces": [ 00:12:20.262 { 00:12:20.262 "nsid": 1, 00:12:20.262 "bdev_name": "Malloc1", 00:12:20.262 "name": "Malloc1", 00:12:20.262 "nguid": "FDC16E27FE2F41A99DAEC19CCE63EF93", 00:12:20.262 "uuid": "fdc16e27-fe2f-41a9-9dae-c19cce63ef93" 00:12:20.262 }, 00:12:20.262 { 00:12:20.262 "nsid": 2, 00:12:20.262 "bdev_name": "Malloc3", 00:12:20.262 "name": "Malloc3", 00:12:20.262 "nguid": "938D83FD210446199E587038651A9416", 00:12:20.262 "uuid": "938d83fd-2104-4619-9e58-7038651a9416" 00:12:20.262 } 00:12:20.262 ] 00:12:20.262 }, 00:12:20.262 { 00:12:20.262 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.262 "subtype": "NVMe", 00:12:20.262 "listen_addresses": [ 00:12:20.262 { 00:12:20.262 "trtype": "VFIOUSER", 00:12:20.262 "adrfam": "IPv4", 00:12:20.262 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.262 "trsvcid": "0" 00:12:20.262 } 00:12:20.262 ], 00:12:20.262 "allow_any_host": true, 00:12:20.262 "hosts": [], 00:12:20.262 "serial_number": "SPDK2", 00:12:20.262 "model_number": "SPDK bdev Controller", 00:12:20.262 "max_namespaces": 32, 00:12:20.262 "min_cntlid": 1, 00:12:20.262 "max_cntlid": 65519, 
00:12:20.262 "namespaces": [ 00:12:20.262 { 00:12:20.262 "nsid": 1, 00:12:20.262 "bdev_name": "Malloc2", 00:12:20.262 "name": "Malloc2", 00:12:20.262 "nguid": "E8C56628714148539B102F8E2B81D541", 00:12:20.262 "uuid": "e8c56628-7141-4853-9b10-2f8e2b81d541" 00:12:20.262 } 00:12:20.262 ] 00:12:20.262 } 00:12:20.262 ] 00:12:20.262 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:20.262 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:20.262 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1644628 00:12:20.262 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:20.263 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:20.263 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:20.263 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:20.263 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:20.263 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:20.263 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:20.263 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.521 [2024-07-15 12:46:51.216268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.521 Malloc4 00:12:20.521 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:20.521 [2024-07-15 12:46:51.435962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.521 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.521 Asynchronous Event Request test 00:12:20.521 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.521 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.521 Registering asynchronous event callbacks... 00:12:20.521 Starting namespace attribute notice tests for all controllers... 00:12:20.521 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:20.521 aer_cb - Changed Namespace 00:12:20.521 Cleaning up... 
00:12:20.779 [ 00:12:20.779 { 00:12:20.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.780 "subtype": "Discovery", 00:12:20.780 "listen_addresses": [], 00:12:20.780 "allow_any_host": true, 00:12:20.780 "hosts": [] 00:12:20.780 }, 00:12:20.780 { 00:12:20.780 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.780 "subtype": "NVMe", 00:12:20.780 "listen_addresses": [ 00:12:20.780 { 00:12:20.780 "trtype": "VFIOUSER", 00:12:20.780 "adrfam": "IPv4", 00:12:20.780 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.780 "trsvcid": "0" 00:12:20.780 } 00:12:20.780 ], 00:12:20.780 "allow_any_host": true, 00:12:20.780 "hosts": [], 00:12:20.780 "serial_number": "SPDK1", 00:12:20.780 "model_number": "SPDK bdev Controller", 00:12:20.780 "max_namespaces": 32, 00:12:20.780 "min_cntlid": 1, 00:12:20.780 "max_cntlid": 65519, 00:12:20.780 "namespaces": [ 00:12:20.780 { 00:12:20.780 "nsid": 1, 00:12:20.780 "bdev_name": "Malloc1", 00:12:20.780 "name": "Malloc1", 00:12:20.780 "nguid": "FDC16E27FE2F41A99DAEC19CCE63EF93", 00:12:20.780 "uuid": "fdc16e27-fe2f-41a9-9dae-c19cce63ef93" 00:12:20.780 }, 00:12:20.780 { 00:12:20.780 "nsid": 2, 00:12:20.780 "bdev_name": "Malloc3", 00:12:20.780 "name": "Malloc3", 00:12:20.780 "nguid": "938D83FD210446199E587038651A9416", 00:12:20.780 "uuid": "938d83fd-2104-4619-9e58-7038651a9416" 00:12:20.780 } 00:12:20.780 ] 00:12:20.780 }, 00:12:20.780 { 00:12:20.780 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.780 "subtype": "NVMe", 00:12:20.780 "listen_addresses": [ 00:12:20.780 { 00:12:20.780 "trtype": "VFIOUSER", 00:12:20.780 "adrfam": "IPv4", 00:12:20.780 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.780 "trsvcid": "0" 00:12:20.780 } 00:12:20.780 ], 00:12:20.780 "allow_any_host": true, 00:12:20.780 "hosts": [], 00:12:20.780 "serial_number": "SPDK2", 00:12:20.780 "model_number": "SPDK bdev Controller", 00:12:20.780 "max_namespaces": 32, 00:12:20.780 "min_cntlid": 1, 00:12:20.780 "max_cntlid": 65519, 00:12:20.780 "namespaces": [ 00:12:20.780 { 00:12:20.780 "nsid": 1, 00:12:20.780 "bdev_name": "Malloc2", 00:12:20.780 "name": "Malloc2", 00:12:20.780 "nguid": "E8C56628714148539B102F8E2B81D541", 00:12:20.780 "uuid": "e8c56628-7141-4853-9b10-2f8e2b81d541" 00:12:20.780 }, 00:12:20.780 { 00:12:20.780 "nsid": 2, 00:12:20.780 "bdev_name": "Malloc4", 00:12:20.780 "name": "Malloc4", 00:12:20.780 "nguid": "A1944CF4B2A14956B0C8A053C3897356", 00:12:20.780 "uuid": "a1944cf4-b2a1-4956-b0c8-a053c3897356" 00:12:20.780 } 00:12:20.780 ] 00:12:20.780 } 00:12:20.780 ] 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1644628 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1636560 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1636560 ']' 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1636560 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1636560 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1636560' 00:12:20.780 killing process with pid 1636560 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1636560 00:12:20.780 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1636560 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1644654 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1644654' 00:12:21.038 Process pid: 1644654 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1644654 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1644654 ']' 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.038 12:46:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:21.295 [2024-07-15 12:46:51.996248] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:21.295 [2024-07-15 12:46:51.997136] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:21.296 [2024-07-15 12:46:51.997172] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.296 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.296 [2024-07-15 12:46:52.063698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.296 [2024-07-15 12:46:52.143470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.296 [2024-07-15 12:46:52.143512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:21.296 [2024-07-15 12:46:52.143519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.296 [2024-07-15 12:46:52.143525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.296 [2024-07-15 12:46:52.143529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.296 [2024-07-15 12:46:52.143592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.296 [2024-07-15 12:46:52.143677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.296 [2024-07-15 12:46:52.143786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.296 [2024-07-15 12:46:52.143788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.296 [2024-07-15 12:46:52.229412] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:21.296 [2024-07-15 12:46:52.229495] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:21.296 [2024-07-15 12:46:52.230078] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:21.296 [2024-07-15 12:46:52.230502] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:21.296 [2024-07-15 12:46:52.230511] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:21.861 12:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.861 12:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:21.861 12:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:23.236 12:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:23.236 12:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:23.236 12:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:23.236 12:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.236 12:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:23.236 12:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.236 Malloc1 00:12:23.495 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:23.495 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:23.785 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:24.101 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:24.101 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:24.101 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.101 Malloc2 00:12:24.101 12:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:24.359 12:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1644654 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1644654 ']' 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1644654 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.618 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1644654 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1644654' 00:12:24.877 killing process with pid 1644654 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1644654 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1644654 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:24.877 12:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:24.877 00:12:24.877 real 0m52.536s 00:12:24.878 user 3m27.742s 00:12:24.878 sys 0m3.638s 00:12:24.878 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.878 12:46:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:24.878 ************************************ 00:12:24.878 END TEST nvmf_vfio_user 00:12:24.878 ************************************ 00:12:25.137 12:46:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:25.137 12:46:55 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.137 12:46:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.137 12:46:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.137 12:46:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.137 ************************************ 00:12:25.137 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:25.137 ************************************ 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.137 * Looking for test storage... 00:12:25.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.137 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1645416 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1645416' 00:12:25.138 Process pid: 1645416 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1645416 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1645416 ']' 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.138 12:46:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.138 [2024-07-15 12:46:56.036184] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:25.138 [2024-07-15 12:46:56.036243] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.138 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.397 [2024-07-15 12:46:56.103368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.397 [2024-07-15 12:46:56.183023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.397 [2024-07-15 12:46:56.183058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.397 [2024-07-15 12:46:56.183065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.397 [2024-07-15 12:46:56.183072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.397 [2024-07-15 12:46:56.183077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
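The tracepoint notices above are printed because the target is launched with -e 0xFFFF, which enables every tracepoint group. If a failure in this suite needed deeper inspection, the trace could be pulled either live or post-mortem, roughly as follows; the spdk_trace location assumes an in-tree build, and the -f form for offline decoding is an assumption about the tool's interface rather than something this log shows:

  # Live snapshot from the running app (app name "nvmf", shm id 0, per the notice above):
  $SPDK/build/bin/spdk_trace -s nvmf -i 0

  # Post-mortem: preserve the shared-memory file the notice mentions and decode it later.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  $SPDK/build/bin/spdk_trace -f /tmp/nvmf_trace.0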
00:12:25.397 [2024-07-15 12:46:56.183129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.397 [2024-07-15 12:46:56.183253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.397 [2024-07-15 12:46:56.183254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.964 12:46:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.964 12:46:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:25.964 12:46:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:26.901 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:26.901 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:26.901 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:26.901 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.901 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.160 malloc0 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.160 12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.160 
12:46:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:27.160 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.160 00:12:27.160 00:12:27.160 CUnit - A unit testing framework for C - Version 2.1-3 00:12:27.160 http://cunit.sourceforge.net/ 00:12:27.160 00:12:27.160 00:12:27.160 Suite: nvme_compliance 00:12:27.160 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 12:46:58.074842] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.160 [2024-07-15 12:46:58.076179] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:27.160 [2024-07-15 12:46:58.076194] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:27.160 [2024-07-15 12:46:58.076199] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:27.160 [2024-07-15 12:46:58.077868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.160 passed 00:12:27.419 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 12:46:58.158399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.419 [2024-07-15 12:46:58.161413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.419 passed 00:12:27.419 Test: admin_identify_ns ...[2024-07-15 12:46:58.240240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.419 [2024-07-15 12:46:58.302233] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:27.419 [2024-07-15 12:46:58.310246] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:27.419 [2024-07-15 12:46:58.331336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.419 passed 00:12:27.678 Test: admin_get_features_mandatory_features ...[2024-07-15 12:46:58.407328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.678 [2024-07-15 12:46:58.410346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.678 passed 00:12:27.678 Test: admin_get_features_optional_features ...[2024-07-15 12:46:58.488851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.678 [2024-07-15 12:46:58.491870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.678 passed 00:12:27.678 Test: admin_set_features_number_of_queues ...[2024-07-15 12:46:58.570672] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.938 [2024-07-15 12:46:58.676307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.938 passed 00:12:27.938 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 12:46:58.753282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.938 [2024-07-15 12:46:58.756308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.938 passed 00:12:27.938 Test: admin_get_log_page_with_lpo ...[2024-07-15 12:46:58.832053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.197 [2024-07-15 12:46:58.903233] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:28.197 [2024-07-15 12:46:58.916294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.197 passed 00:12:28.197 Test: fabric_property_get ...[2024-07-15 12:46:58.992335] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.197 [2024-07-15 12:46:58.993568] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:28.197 [2024-07-15 12:46:58.995353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.197 passed 00:12:28.197 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 12:46:59.073873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.197 [2024-07-15 12:46:59.077449] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:28.197 [2024-07-15 12:46:59.078905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.197 passed 00:12:28.456 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 12:46:59.155665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.456 [2024-07-15 12:46:59.267235] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.456 [2024-07-15 12:46:59.283233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.456 [2024-07-15 12:46:59.288328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.456 passed 00:12:28.456 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 12:46:59.361527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.456 [2024-07-15 12:46:59.362768] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:28.456 [2024-07-15 12:46:59.364550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.456 passed 00:12:28.714 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 12:46:59.444615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.714 [2024-07-15 12:46:59.521235] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:28.714 [2024-07-15 12:46:59.545230] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.714 [2024-07-15 12:46:59.550310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.714 passed 00:12:28.714 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 12:46:59.624297] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.714 [2024-07-15 12:46:59.625525] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:28.714 [2024-07-15 12:46:59.625547] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:28.714 [2024-07-15 12:46:59.628322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.714 passed 00:12:28.973 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 12:46:59.706585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.973 [2024-07-15 12:46:59.799238] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:28.973 [2024-07-15 12:46:59.807237] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:28.973 [2024-07-15 12:46:59.812238] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:28.973 [2024-07-15 12:46:59.823241] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:28.974 [2024-07-15 12:46:59.852311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.974 passed 00:12:29.232 Test: admin_create_io_sq_verify_pc ...[2024-07-15 12:46:59.929286] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.232 [2024-07-15 12:46:59.946238] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:29.232 [2024-07-15 12:46:59.963479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.232 passed 00:12:29.232 Test: admin_create_io_qp_max_qps ...[2024-07-15 12:47:00.043012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.621 [2024-07-15 12:47:01.137235] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:30.621 [2024-07-15 12:47:01.518931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.621 passed 00:12:30.879 Test: admin_create_io_sq_shared_cq ...[2024-07-15 12:47:01.597203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.879 [2024-07-15 12:47:01.730232] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:30.879 [2024-07-15 12:47:01.767319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.879 passed 00:12:30.879 00:12:30.879 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.879 suites 1 1 n/a 0 0 00:12:30.879 tests 18 18 18 0 0 00:12:30.879 asserts 360 360 360 0 n/a 00:12:30.879 00:12:30.879 Elapsed time = 1.518 seconds 00:12:30.879 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1645416 00:12:30.879 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1645416 ']' 00:12:30.879 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1645416 00:12:30.879 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:30.879 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.879 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1645416 00:12:31.137 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.137 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.137 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1645416' 00:12:31.137 killing process with pid 1645416 00:12:31.137 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1645416 00:12:31.137 12:47:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1645416 00:12:31.137 12:47:02 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:31.137 12:47:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:31.137 00:12:31.137 real 0m6.192s 00:12:31.137 user 0m17.650s 00:12:31.137 sys 0m0.471s 00:12:31.137 12:47:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.137 12:47:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.137 ************************************ 00:12:31.137 END TEST nvmf_vfio_user_nvme_compliance 00:12:31.137 ************************************ 00:12:31.397 12:47:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.397 12:47:02 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.397 12:47:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.397 12:47:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.397 12:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.397 ************************************ 00:12:31.397 START TEST nvmf_vfio_user_fuzz 00:12:31.397 ************************************ 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.397 * Looking for test storage... 00:12:31.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.397 12:47:02 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:31.397 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1646615 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1646615' 00:12:31.398 Process pid: 1646615 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1646615 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1646615 ']' 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
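With the single-core target (-m 0x1) up and listening on /var/tmp/spdk.sock, vfio_user_fuzz.sh builds the same minimal topology as the earlier suites and then aims the generic NVMe fuzzer at the vfio-user endpoint, as the xtrace lines that follow show (rpc_cmd in the script resolves to scripts/rpc.py). Condensed, with flag readings hedged to what the log itself demonstrates: the fixed -S seed makes a failing run replayable, and the admin-opcode dump in the results indicates admin commands were fuzzed alongside I/O.

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

  # 30-second randomized run pinned to core 1 (-m 0x2), seeded for reproducibility:
  $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a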
00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.398 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:31.657 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.657 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:31.657 12:47:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.594 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.853 malloc0 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:32.853 12:47:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:04.933 Fuzzing completed. 
Shutting down the fuzz application 00:13:04.933 00:13:04.933 Dumping successful admin opcodes: 00:13:04.933 8, 9, 10, 24, 00:13:04.933 Dumping successful io opcodes: 00:13:04.933 0, 00:13:04.933 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1040893, total successful commands: 4106, random_seed: 1776865152 00:13:04.933 NS: 0x200003a1ef00 admin qp, Total commands completed: 257754, total successful commands: 2081, random_seed: 2882224448 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1646615 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1646615 ']' 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1646615 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1646615 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1646615' 00:13:04.933 killing process with pid 1646615 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1646615 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1646615 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:04.933 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:04.933 00:13:04.933 real 0m32.233s 00:13:04.933 user 0m30.968s 00:13:04.933 sys 0m31.026s 00:13:04.934 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.934 12:47:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.934 ************************************ 00:13:04.934 END TEST nvmf_vfio_user_fuzz 00:13:04.934 ************************************ 00:13:04.934 12:47:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:04.934 12:47:34 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:04.934 12:47:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.934 12:47:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.934 12:47:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.934 ************************************ 
00:13:04.934 START TEST nvmf_host_management 00:13:04.934 ************************************ 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:04.934 * Looking for test storage... 00:13:04.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.934 
12:47:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.934 12:47:34 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.934 12:47:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:09.190 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:09.190 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.190 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:09.191 Found net devices under 0000:86:00.0: cvl_0_0 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:09.191 Found net devices under 0000:86:00.1: cvl_0_1 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.191 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:09.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:09.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms
00:13:09.450
00:13:09.450 --- 10.0.0.2 ping statistics ---
00:13:09.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:09.450 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:09.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:09.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
00:13:09.450
00:13:09.450 --- 10.0.0.1 ping statistics ---
00:13:09.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:09.450 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1654913
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1654913
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1654913 ']'
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:13:09.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.450 12:47:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.450 [2024-07-15 12:47:40.373857] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:09.450 [2024-07-15 12:47:40.373905] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.450 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.709 [2024-07-15 12:47:40.447232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.709 [2024-07-15 12:47:40.526331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.709 [2024-07-15 12:47:40.526370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.709 [2024-07-15 12:47:40.526377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.709 [2024-07-15 12:47:40.526383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.709 [2024-07-15 12:47:40.526389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.709 [2024-07-15 12:47:40.526501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.709 [2024-07-15 12:47:40.526609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.709 [2024-07-15 12:47:40.526640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.709 [2024-07-15 12:47:40.526641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.277 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.277 [2024-07-15 12:47:41.230257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.536 12:47:41 
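The nvmf_tcp_init steps traced above reduce to a small, reproducible topology: NIC port cvl_0_0 is moved into a private network namespace to play the target, its sibling cvl_0_1 stays in the root namespace as the initiator, a firewall exception is opened for the NVMe/TCP listener, and both directions are ping-verified before the target starts. A by-hand sketch of the same plumbing, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

This is also why nvmf_tgt itself is launched via ip netns exec cvl_0_0_ns_spdk, as seen in the @480 trace above: the target must live where its listening interface does.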
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.536 Malloc0 00:13:10.536 [2024-07-15 12:47:41.290162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1655181 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1655181 /var/tmp/bdevperf.sock 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1655181 ']' 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.536 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:10.537 { 00:13:10.537 "params": { 00:13:10.537 "name": "Nvme$subsystem", 00:13:10.537 "trtype": "$TEST_TRANSPORT", 00:13:10.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:10.537 "adrfam": "ipv4", 00:13:10.537 "trsvcid": "$NVMF_PORT", 00:13:10.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:10.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:10.537 "hdgst": ${hdgst:-false}, 00:13:10.537 "ddgst": ${ddgst:-false} 00:13:10.537 }, 00:13:10.537 "method": "bdev_nvme_attach_controller" 00:13:10.537 } 00:13:10.537 EOF 00:13:10.537 )") 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:10.537 12:47:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:10.537 "params": { 00:13:10.537 "name": "Nvme0", 00:13:10.537 "trtype": "tcp", 00:13:10.537 "traddr": "10.0.0.2", 00:13:10.537 "adrfam": "ipv4", 00:13:10.537 "trsvcid": "4420", 00:13:10.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:10.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:10.537 "hdgst": false, 00:13:10.537 "ddgst": false 00:13:10.537 }, 00:13:10.537 "method": "bdev_nvme_attach_controller" 00:13:10.537 }' 00:13:10.537 [2024-07-15 12:47:41.382493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:10.537 [2024-07-15 12:47:41.382539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655181 ] 00:13:10.537 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.537 [2024-07-15 12:47:41.449733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.796 [2024-07-15 12:47:41.523892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.796 Running I/O for 10 seconds... 
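The --json /dev/fd/63 argument feeds bdevperf the controller description rendered by gen_nvmf_target_json above. Laid out readably, the attach entry for this run (exactly as printed, just re-indented) is:

  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

So bdevperf connects to the target as host nqn.2016-06.io.spdk:host0 -- the identity the next step revokes while I/O is in flight.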
00:13:11.365 12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1036
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1036 -ge 100 ']'
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 12:47:42.273405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214f460 is same with the state(5) to be set
[... identical tcp.c:1607 recv-state message repeated back to back through 12:47:42.273576; duplicates condensed ...]
[2024-07-15 12:47:42.276934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 12:47:42.276974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining ASYNC EVENT REQUESTs (cid:1-3) print the same ABORTED - SQ DELETION completion; duplicates condensed ...]
[2024-07-15 12:47:42.277031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dd980 is same with the state(5) to be set
[2024-07-15 12:47:42.277452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 12:47:42.277474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... every other outstanding I/O on the queue pair (READ and WRITE, cid:0-63, lba:16384-24448) prints the same command/ABORTED - SQ DELETION pair through 12:47:42.278622; roughly 120 near-identical records condensed ...]
00:13:11.367 12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[2024-07-15 12:47:42.278696] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbeeb20 was disconnected and freed. reset controller.
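That burst of ABORTED - SQ DELETION completions is the point of the test: host_management.sh@84 revoked the host's access while bdevperf still had a full queue of I/Os in flight (queue depth 64, cid 0-63 above), so the target tore down the connection and every outstanding READ/WRITE completed as aborted. The two RPCs driving this phase, written out in plain rpc.py form (the script invokes them through its rpc_cmd wrapper):

  # @84: revoke access -- the target disconnects the active host immediately
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # @85 (below): restore access so a later pass can connect again
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0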
00:13:11.367 12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
[2024-07-15 12:47:42.279604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:11.367 task offset: 17536 on job bdev=Nvme0n1 fails
00:13:11.367
00:13:11.367                                                                 Latency(us)
00:13:11.367 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:13:11.367 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:11.367 Job: Nvme0n1 ended in about 0.59 seconds with error
00:13:11.367 Verification LBA range: start 0x0 length 0x400
00:13:11.367 Nvme0n1                     :       0.59    1937.84     121.12    107.66     0.00   30630.25    2108.55   27582.11
00:13:11.368 ===================================================================================================================
00:13:11.368 Total                       :               1937.84     121.12    107.66     0.00   30630.25    2108.55   27582.11
00:13:11.368 [2024-07-15 12:47:42.281185] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-07-15 12:47:42.281202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dd980 (9): Bad file descriptor
[2024-07-15 12:47:42.283074] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
[2024-07-15 12:47:42.283154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
[2024-07-15 12:47:42.283179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 12:47:42.283197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
[2024-07-15 12:47:42.283206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
[2024-07-15 12:47:42.283214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-15 12:47:42.283221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7dd980
[2024-07-15 12:47:42.283246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dd980 (9): Bad file descriptor
[2024-07-15 12:47:42.283258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-15 12:47:42.283266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-15 12:47:42.283275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-15 12:47:42.283287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
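The 'does not allow host' error above is bdevperf's automatic controller reset racing the @85 add_host: the FABRIC CONNECT reaches the target before access is restored and is rejected with command-specific status 0x84 (sct 1, sc 132, visible as COMMAND SPECIFIC (01/84) in the completion), i.e. the connect was refused for this host. If one wanted to confirm the whitelist state at that moment, an illustrative (not part of the test) inspection would be:

  # dump the subsystem's host access settings via the target's RPC socket
  scripts/rpc.py nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | {allow_any_host, hosts}'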
00:13:11.368 12:47:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.368 12:47:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1655181 00:13:12.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1655181) - No such process 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:12.744 { 00:13:12.744 "params": { 00:13:12.744 "name": "Nvme$subsystem", 00:13:12.744 "trtype": "$TEST_TRANSPORT", 00:13:12.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:12.744 "adrfam": "ipv4", 00:13:12.744 "trsvcid": "$NVMF_PORT", 00:13:12.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:12.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:12.744 "hdgst": ${hdgst:-false}, 00:13:12.744 "ddgst": ${ddgst:-false} 00:13:12.744 }, 00:13:12.744 "method": "bdev_nvme_attach_controller" 00:13:12.744 } 00:13:12.744 EOF 00:13:12.744 )") 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:12.744 12:47:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:12.744 "params": { 00:13:12.744 "name": "Nvme0", 00:13:12.744 "trtype": "tcp", 00:13:12.744 "traddr": "10.0.0.2", 00:13:12.744 "adrfam": "ipv4", 00:13:12.744 "trsvcid": "4420", 00:13:12.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:12.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:12.744 "hdgst": false, 00:13:12.744 "ddgst": false 00:13:12.744 }, 00:13:12.744 "method": "bdev_nvme_attach_controller" 00:13:12.744 }' 00:13:12.744 [2024-07-15 12:47:43.343048] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:12.744 [2024-07-15 12:47:43.343095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1655434 ] 00:13:12.744 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.744 [2024-07-15 12:47:43.410656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.744 [2024-07-15 12:47:43.481694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.002 Running I/O for 1 seconds... 
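This second bdevperf pass is the positive half of the check: with the host re-admitted at @85, the same verify workload must now run to completion with zero failures. A standalone approximation of what was just launched (the first pass additionally passed -r /var/tmp/bdevperf.sock so waitforio could poll it over RPC; the JSON fd is the same generated config shown earlier):

  build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1

The pass/fail signal is simply the Fail/s column in the table that follows: 107.66 for the aborted pass above, 0.00 here.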
00:13:13.937
00:13:13.937 Latency(us)
00:13:13.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:13.937 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:13.937 Verification LBA range: start 0x0 length 0x400
00:13:13.937 Nvme0n1 : 1.03 1927.56 120.47 0.00 0.00 32689.45 6382.64 27126.21
00:13:13.937 ===================================================================================================================
00:13:13.937 Total : 1927.56 120.47 0.00 0.00 32689.45 6382.64 27126.21
00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.197 12:47:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.197 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1654913 ']' 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1654913 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1654913 ']' 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1654913 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1654913 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1654913' 00:13:14.197 killing process with pid 1654913 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1654913 00:13:14.197 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1654913 00:13:14.455 [2024-07-15 12:47:45.273641]
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.455 12:47:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.989 12:47:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.989 12:47:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:16.989 00:13:16.989 real 0m12.934s 00:13:16.989 user 0m23.175s 00:13:16.989 sys 0m5.505s 00:13:16.989 12:47:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.989 12:47:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.989 ************************************ 00:13:16.989 END TEST nvmf_host_management 00:13:16.989 ************************************ 00:13:16.989 12:47:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:16.989 12:47:47 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.989 12:47:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:16.989 12:47:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.989 12:47:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.989 ************************************ 00:13:16.989 START TEST nvmf_lvol 00:13:16.989 ************************************ 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.989 * Looking for test storage... 
00:13:16.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.989 12:47:47 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.989 12:47:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:22.274 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:22.274 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:22.274 Found net devices under 0000:86:00.0: cvl_0_0 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:22.274 Found net devices under 0000:86:00.1: cvl_0_1 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.274 
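Behind the Found-net-devices entries above: for every matching PCI function, the harness expands /sys/bus/pci/devices/$pci/net/* and takes the directory names as the kernel interface names (cvl_0_0, cvl_0_1 here). An illustrative stand-alone loop built on the same sysfs layout (the lspci invocation is mine, not the harness's; 0x159b is the E810 device ID the log matched):

for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    # Every netdev registered for this PCI function appears as a
    # subdirectory of its sysfs node.
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "$pci -> ${net##*/}"
    done
done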
12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.274 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:13:22.533 00:13:22.533 --- 10.0.0.2 ping statistics --- 00:13:22.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.533 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:22.533 00:13:22.533 --- 10.0.0.1 ping statistics --- 00:13:22.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.533 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1659193 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1659193 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1659193 ']' 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.533 12:47:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.533 [2024-07-15 12:47:53.370695] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:22.533 [2024-07-15 12:47:53.370739] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.534 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.534 [2024-07-15 12:47:53.441655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.793 [2024-07-15 12:47:53.519913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.793 [2024-07-15 12:47:53.519955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
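The nvmf_tcp_init sequence traced above is what splits target from initiator on a single dual-port NIC: port cvl_0_0 moves into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the cross-pings prove both directions work. Condensed from the commands in the log (run as root; interface and namespace names are the log's own):

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is also why the nvmf_tgt just started above runs under "ip netns exec cvl_0_0_ns_spdk": NVMF_APP is prefixed with NVMF_TARGET_NS_CMD so the target listens inside the namespace.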
00:13:22.793 [2024-07-15 12:47:53.519963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.793 [2024-07-15 12:47:53.519969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.793 [2024-07-15 12:47:53.519975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.793 [2024-07-15 12:47:53.520036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.793 [2024-07-15 12:47:53.520140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.793 [2024-07-15 12:47:53.520142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.359 12:47:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.359 12:47:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:23.360 12:47:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.360 12:47:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.360 12:47:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:23.360 12:47:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.360 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:23.617 [2024-07-15 12:47:54.365840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.617 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.875 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:23.875 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.875 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:23.875 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:24.132 12:47:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:24.390 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0cc0fc14-7013-4d99-9d11-dc61f0eaa7a8 00:13:24.390 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0cc0fc14-7013-4d99-9d11-dc61f0eaa7a8 lvol 20 00:13:24.650 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 00:13:24.650 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:24.650 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 00:13:24.908 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:13:25.167 [2024-07-15 12:47:55.894894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.167 12:47:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.167 12:47:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1659689 00:13:25.167 12:47:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:25.167 12:47:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:25.430 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.437 12:47:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 MY_SNAPSHOT 00:13:26.437 12:47:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e9983c84-76e5-4d50-afe9-e2a59fb0cb2d 00:13:26.437 12:47:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 30 00:13:26.696 12:47:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e9983c84-76e5-4d50-afe9-e2a59fb0cb2d MY_CLONE 00:13:26.954 12:47:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3323853b-7c68-44a5-9330-a7c7039765ce 00:13:26.954 12:47:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3323853b-7c68-44a5-9330-a7c7039765ce 00:13:27.522 12:47:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1659689 00:13:35.637 Initializing NVMe Controllers 00:13:35.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:35.637 Controller IO queue size 128, less than required. 00:13:35.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:35.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:35.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:35.637 Initialization complete. Launching workers. 
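While spdk_nvme_perf drives random writes from cores 3 and 4 (its results follow), the test mutates the volume underneath the live workload. The RPC sequence from the trace, condensed to its four steps (UUIDs are the ones the log reports; sizes in MiB; rpc.py path shortened):

# Snapshot the live 20 MiB lvol, then grow the lvol itself to 30 MiB.
scripts/rpc.py bdev_lvol_snapshot 8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 MY_SNAPSHOT
scripts/rpc.py bdev_lvol_resize 8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 30
# Clone the snapshot, then inflate the clone so it owns all of its clusters
# instead of sharing unwritten ones with MY_SNAPSHOT.
scripts/rpc.py bdev_lvol_clone e9983c84-76e5-4d50-afe9-e2a59fb0cb2d MY_CLONE
scripts/rpc.py bdev_lvol_inflate 3323853b-7c68-44a5-9330-a7c7039765ce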
00:13:35.637 ========================================================
00:13:35.637 Latency(us)
00:13:35.637 Device Information : IOPS MiB/s Average min max
00:13:35.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12145.60 47.44 10542.52 1253.23 55747.11
00:13:35.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11967.60 46.75 10697.06 3645.35 54819.70
00:13:35.637 ========================================================
00:13:35.637 Total : 24113.20 94.19 10619.22 1253.23 55747.11
00:13:35.637
00:13:35.637 12:48:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:35.913 12:48:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8f9dcf9f-e08a-4321-925a-254ff4ce6fb6 00:13:36.172 12:48:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cc0fc14-7013-4d99-9d11-dc61f0eaa7a8 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.172 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.432 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1659193 ']' 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1659193 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1659193 ']' 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1659193 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1659193 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1659193' 00:13:36.432 killing process with pid 1659193 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1659193 00:13:36.432 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1659193 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.692
12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.692 12:48:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.601 12:48:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.601 00:13:38.601 real 0m22.094s 00:13:38.601 user 1m4.492s 00:13:38.601 sys 0m7.039s 00:13:38.601 12:48:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.601 12:48:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:38.601 ************************************ 00:13:38.601 END TEST nvmf_lvol 00:13:38.601 ************************************ 00:13:38.861 12:48:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.861 12:48:09 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:38.861 12:48:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.861 12:48:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.861 12:48:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.861 ************************************ 00:13:38.861 START TEST nvmf_lvs_grow 00:13:38.861 ************************************ 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:38.861 * Looking for test storage... 
00:13:38.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.861 12:48:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:45.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:45.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:45.434 Found net devices under 0000:86:00.0: cvl_0_0 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:45.434 Found net devices under 0000:86:00.1: cvl_0_1 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:45.434 00:13:45.434 --- 10.0.0.2 ping statistics --- 00:13:45.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.434 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:45.434 00:13:45.434 --- 10.0.0.1 ping statistics --- 00:13:45.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.434 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.434 12:48:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1665563 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1665563 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1665563 ']' 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.435 12:48:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.435 [2024-07-15 12:48:15.492332] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:45.435 [2024-07-15 12:48:15.492372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.435 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.435 [2024-07-15 12:48:15.564163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.435 [2024-07-15 12:48:15.635229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.435 [2024-07-15 12:48:15.635269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:45.435 [2024-07-15 12:48:15.635276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.435 [2024-07-15 12:48:15.635282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.435 [2024-07-15 12:48:15.635287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.435 [2024-07-15 12:48:15.635305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.435 12:48:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:45.693 [2024-07-15 12:48:16.490580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:45.693 ************************************ 00:13:45.693 START TEST lvs_grow_clean 00:13:45.693 ************************************ 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.693 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.952 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:45.952 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:46.210 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:46.210 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:46.210 12:48:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:46.210 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:46.210 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:46.210 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 lvol 150 00:13:46.469 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d2be134-9c9a-469e-bd5b-1142c09980d3 00:13:46.469 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:46.469 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:46.728 [2024-07-15 12:48:17.445993] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:46.728 [2024-07-15 12:48:17.446043] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:46.728 true 00:13:46.729 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:46.729 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:46.729 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:46.729 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:46.988 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d2be134-9c9a-469e-bd5b-1142c09980d3 00:13:47.247 12:48:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:47.247 [2024-07-15 12:48:18.107964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.247 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1666063 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1666063 /var/tmp/bdevperf.sock 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1666063 ']' 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.506 12:48:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:47.506 [2024-07-15 12:48:18.336553] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
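
Annotation: at this point lvs_grow_clean has built its whole stack over JSON-RPC and is launching bdevperf against it. A condensed replay of the provisioning calls from the log, with paths shortened to rpc.py, the backing file shortened to aio_file, and $lvs / $lvol standing in for the UUIDs e6e19e8c-... and 0d2be134-... captured above (all shorthand is this annotation's, not the script's):

    truncate -s 200M aio_file                         # 200 MiB backing file
    rpc.py bdev_aio_create aio_file aio_bdev 4096     # AIO bdev with 4 KiB blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs   # -> $lvs, 49 data clusters
    rpc.py bdev_lvol_create -u $lvs lvol 150          # 150 MiB logical volume -> $lvol
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # issued once bdevperf is listening on its own RPC socket (next in the log):
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
           -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
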
00:13:47.506 [2024-07-15 12:48:18.336605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666063 ] 00:13:47.506 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.506 [2024-07-15 12:48:18.404939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.765 [2024-07-15 12:48:18.483848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.332 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.332 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:48.332 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:48.592 Nvme0n1 00:13:48.592 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:48.592 [ 00:13:48.592 { 00:13:48.592 "name": "Nvme0n1", 00:13:48.592 "aliases": [ 00:13:48.592 "0d2be134-9c9a-469e-bd5b-1142c09980d3" 00:13:48.592 ], 00:13:48.592 "product_name": "NVMe disk", 00:13:48.592 "block_size": 4096, 00:13:48.592 "num_blocks": 38912, 00:13:48.592 "uuid": "0d2be134-9c9a-469e-bd5b-1142c09980d3", 00:13:48.592 "assigned_rate_limits": { 00:13:48.592 "rw_ios_per_sec": 0, 00:13:48.592 "rw_mbytes_per_sec": 0, 00:13:48.592 "r_mbytes_per_sec": 0, 00:13:48.592 "w_mbytes_per_sec": 0 00:13:48.592 }, 00:13:48.592 "claimed": false, 00:13:48.592 "zoned": false, 00:13:48.592 "supported_io_types": { 00:13:48.592 "read": true, 00:13:48.592 "write": true, 00:13:48.592 "unmap": true, 00:13:48.592 "flush": true, 00:13:48.592 "reset": true, 00:13:48.592 "nvme_admin": true, 00:13:48.592 "nvme_io": true, 00:13:48.592 "nvme_io_md": false, 00:13:48.592 "write_zeroes": true, 00:13:48.592 "zcopy": false, 00:13:48.592 "get_zone_info": false, 00:13:48.592 "zone_management": false, 00:13:48.592 "zone_append": false, 00:13:48.592 "compare": true, 00:13:48.592 "compare_and_write": true, 00:13:48.592 "abort": true, 00:13:48.592 "seek_hole": false, 00:13:48.592 "seek_data": false, 00:13:48.592 "copy": true, 00:13:48.592 "nvme_iov_md": false 00:13:48.592 }, 00:13:48.592 "memory_domains": [ 00:13:48.592 { 00:13:48.592 "dma_device_id": "system", 00:13:48.592 "dma_device_type": 1 00:13:48.592 } 00:13:48.592 ], 00:13:48.592 "driver_specific": { 00:13:48.592 "nvme": [ 00:13:48.592 { 00:13:48.592 "trid": { 00:13:48.592 "trtype": "TCP", 00:13:48.592 "adrfam": "IPv4", 00:13:48.592 "traddr": "10.0.0.2", 00:13:48.592 "trsvcid": "4420", 00:13:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:48.592 }, 00:13:48.592 "ctrlr_data": { 00:13:48.592 "cntlid": 1, 00:13:48.592 "vendor_id": "0x8086", 00:13:48.592 "model_number": "SPDK bdev Controller", 00:13:48.592 "serial_number": "SPDK0", 00:13:48.592 "firmware_revision": "24.09", 00:13:48.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:48.592 "oacs": { 00:13:48.592 "security": 0, 00:13:48.592 "format": 0, 00:13:48.592 "firmware": 0, 00:13:48.592 "ns_manage": 0 00:13:48.592 }, 00:13:48.592 "multi_ctrlr": true, 00:13:48.592 "ana_reporting": false 00:13:48.592 }, 
00:13:48.592 "vs": { 00:13:48.592 "nvme_version": "1.3" 00:13:48.592 }, 00:13:48.592 "ns_data": { 00:13:48.592 "id": 1, 00:13:48.592 "can_share": true 00:13:48.592 } 00:13:48.592 } 00:13:48.592 ], 00:13:48.592 "mp_policy": "active_passive" 00:13:48.592 } 00:13:48.592 } 00:13:48.592 ] 00:13:48.851 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1666300 00:13:48.851 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:48.852 12:48:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:48.852 Running I/O for 10 seconds... 00:13:49.832 Latency(us) 00:13:49.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.832 Nvme0n1 : 1.00 23075.00 90.14 0.00 0.00 0.00 0.00 0.00 00:13:49.832 =================================================================================================================== 00:13:49.832 Total : 23075.00 90.14 0.00 0.00 0.00 0.00 0.00 00:13:49.832 00:13:50.769 12:48:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:50.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.769 Nvme0n1 : 2.00 23205.00 90.64 0.00 0.00 0.00 0.00 0.00 00:13:50.769 =================================================================================================================== 00:13:50.769 Total : 23205.00 90.64 0.00 0.00 0.00 0.00 0.00 00:13:50.769 00:13:51.027 true 00:13:51.027 12:48:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:51.027 12:48:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:51.027 12:48:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:51.027 12:48:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:51.027 12:48:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1666300 00:13:51.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.963 Nvme0n1 : 3.00 23280.33 90.94 0.00 0.00 0.00 0.00 0.00 00:13:51.963 =================================================================================================================== 00:13:51.963 Total : 23280.33 90.94 0.00 0.00 0.00 0.00 0.00 00:13:51.963 00:13:52.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.896 Nvme0n1 : 4.00 23397.75 91.40 0.00 0.00 0.00 0.00 0.00 00:13:52.896 =================================================================================================================== 00:13:52.896 Total : 23397.75 91.40 0.00 0.00 0.00 0.00 0.00 00:13:52.896 00:13:53.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.830 Nvme0n1 : 5.00 23436.80 91.55 0.00 0.00 0.00 0.00 0.00 00:13:53.830 =================================================================================================================== 00:13:53.830 
Total : 23436.80 91.55 0.00 0.00 0.00 0.00 0.00 00:13:53.830 00:13:54.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.763 Nvme0n1 : 6.00 23500.00 91.80 0.00 0.00 0.00 0.00 0.00 00:13:54.763 =================================================================================================================== 00:13:54.763 Total : 23500.00 91.80 0.00 0.00 0.00 0.00 0.00 00:13:54.763 00:13:56.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.139 Nvme0n1 : 7.00 23538.86 91.95 0.00 0.00 0.00 0.00 0.00 00:13:56.139 =================================================================================================================== 00:13:56.139 Total : 23538.86 91.95 0.00 0.00 0.00 0.00 0.00 00:13:56.139 00:13:56.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.707 Nvme0n1 : 8.00 23561.12 92.04 0.00 0.00 0.00 0.00 0.00 00:13:56.707 =================================================================================================================== 00:13:56.707 Total : 23561.12 92.04 0.00 0.00 0.00 0.00 0.00 00:13:56.707 00:13:58.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.085 Nvme0n1 : 9.00 23589.56 92.15 0.00 0.00 0.00 0.00 0.00 00:13:58.085 =================================================================================================================== 00:13:58.085 Total : 23589.56 92.15 0.00 0.00 0.00 0.00 0.00 00:13:58.085 00:13:59.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.023 Nvme0n1 : 10.00 23618.50 92.26 0.00 0.00 0.00 0.00 0.00 00:13:59.023 =================================================================================================================== 00:13:59.023 Total : 23618.50 92.26 0.00 0.00 0.00 0.00 0.00 00:13:59.023 00:13:59.023 00:13:59.023 Latency(us) 00:13:59.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.023 Nvme0n1 : 10.00 23611.44 92.23 0.00 0.00 5417.46 1538.67 10485.76 00:13:59.023 =================================================================================================================== 00:13:59.023 Total : 23611.44 92.23 0.00 0.00 5417.46 1538.67 10485.76 00:13:59.023 0 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1666063 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1666063 ']' 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1666063 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666063 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666063' 00:13:59.023 killing process with pid 1666063 00:13:59.023 12:48:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1666063 00:13:59.023 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.023 00:13:59.023 Latency(us) 00:13:59.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.023 =================================================================================================================== 00:13:59.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1666063 00:13:59.023 12:48:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:59.287 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.547 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:59.547 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:59.547 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:59.547 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:59.547 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:59.807 [2024-07-15 12:48:30.642832] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:59.807 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:14:00.066 request: 00:14:00.066 { 00:14:00.066 "uuid": "e6e19e8c-9963-4a89-aa84-44effda9e8b8", 00:14:00.066 "method": "bdev_lvol_get_lvstores", 00:14:00.066 "req_id": 1 00:14:00.066 } 00:14:00.066 Got JSON-RPC error response 00:14:00.066 response: 00:14:00.066 { 00:14:00.066 "code": -19, 00:14:00.066 "message": "No such device" 00:14:00.066 } 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:00.066 aio_bdev 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d2be134-9c9a-469e-bd5b-1142c09980d3 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0d2be134-9c9a-469e-bd5b-1142c09980d3 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.066 12:48:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:00.324 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d2be134-9c9a-469e-bd5b-1142c09980d3 -t 2000 00:14:00.583 [ 00:14:00.584 { 00:14:00.584 "name": "0d2be134-9c9a-469e-bd5b-1142c09980d3", 00:14:00.584 "aliases": [ 00:14:00.584 "lvs/lvol" 00:14:00.584 ], 00:14:00.584 "product_name": "Logical Volume", 00:14:00.584 "block_size": 4096, 00:14:00.584 "num_blocks": 38912, 00:14:00.584 "uuid": "0d2be134-9c9a-469e-bd5b-1142c09980d3", 00:14:00.584 "assigned_rate_limits": { 00:14:00.584 "rw_ios_per_sec": 0, 00:14:00.584 "rw_mbytes_per_sec": 0, 00:14:00.584 "r_mbytes_per_sec": 0, 00:14:00.584 "w_mbytes_per_sec": 0 00:14:00.584 }, 00:14:00.584 "claimed": false, 00:14:00.584 "zoned": false, 00:14:00.584 "supported_io_types": { 00:14:00.584 "read": true, 00:14:00.584 "write": true, 00:14:00.584 "unmap": true, 00:14:00.584 "flush": false, 00:14:00.584 "reset": true, 00:14:00.584 "nvme_admin": false, 00:14:00.584 "nvme_io": false, 00:14:00.584 
"nvme_io_md": false, 00:14:00.584 "write_zeroes": true, 00:14:00.584 "zcopy": false, 00:14:00.584 "get_zone_info": false, 00:14:00.584 "zone_management": false, 00:14:00.584 "zone_append": false, 00:14:00.584 "compare": false, 00:14:00.584 "compare_and_write": false, 00:14:00.584 "abort": false, 00:14:00.584 "seek_hole": true, 00:14:00.584 "seek_data": true, 00:14:00.584 "copy": false, 00:14:00.584 "nvme_iov_md": false 00:14:00.584 }, 00:14:00.584 "driver_specific": { 00:14:00.584 "lvol": { 00:14:00.584 "lvol_store_uuid": "e6e19e8c-9963-4a89-aa84-44effda9e8b8", 00:14:00.584 "base_bdev": "aio_bdev", 00:14:00.584 "thin_provision": false, 00:14:00.584 "num_allocated_clusters": 38, 00:14:00.584 "snapshot": false, 00:14:00.584 "clone": false, 00:14:00.584 "esnap_clone": false 00:14:00.584 } 00:14:00.584 } 00:14:00.584 } 00:14:00.584 ] 00:14:00.584 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:00.584 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:14:00.584 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:00.584 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:00.584 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:00.584 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:14:00.843 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:00.843 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d2be134-9c9a-469e-bd5b-1142c09980d3 00:14:01.102 12:48:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6e19e8c-9963-4a89-aa84-44effda9e8b8 00:14:01.102 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:01.360 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.360 00:14:01.360 real 0m15.663s 00:14:01.360 user 0m15.414s 00:14:01.360 sys 0m1.374s 00:14:01.360 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.360 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:01.360 ************************************ 00:14:01.360 END TEST lvs_grow_clean 00:14:01.361 ************************************ 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:01.361 ************************************ 00:14:01.361 START TEST lvs_grow_dirty 00:14:01.361 ************************************ 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.361 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:01.619 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:01.619 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:01.877 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:01.877 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:01.877 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:02.135 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:02.135 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:02.136 12:48:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 lvol 150 00:14:02.136 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:02.136 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:02.136 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:02.394 
[2024-07-15 12:48:33.181220] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:02.394 [2024-07-15 12:48:33.181273] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:02.394 true 00:14:02.394 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:02.394 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:02.653 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:02.653 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:02.653 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:02.912 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:02.912 [2024-07-15 12:48:33.859248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.169 12:48:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1668681 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1668681 /var/tmp/bdevperf.sock 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1668681 ']' 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
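
Annotation: the grow itself is a three-step handshake, visible in both the clean and dirty runs, and its steps are spread across the test: the backing file is enlarged and the AIO bdev rescanned up front (total_data_clusters stays 49 until the grow), then bdev_lvol_grow_lvstore is issued mid-I/O while bdevperf keeps writing, and the count jumps to 99. A sketch with the same shorthand as above:

    truncate -s 400M aio_file                  # grow the backing file 200 MiB -> 400 MiB
    rpc.py bdev_aio_rescan aio_bdev            # bdev resizes: 51200 -> 102400 4 KiB blocks
    rpc.py bdev_lvol_grow_lvstore -u $lvs      # lvstore claims the new clusters
    rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'
    # 49 before the grow, 99 after -- the (( data_clusters == 99 )) assertions in the log
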
00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.169 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:03.169 [2024-07-15 12:48:34.074296] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:03.169 [2024-07-15 12:48:34.074345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668681 ] 00:14:03.169 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.426 [2024-07-15 12:48:34.142622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.426 [2024-07-15 12:48:34.220349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.991 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.991 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:03.991 12:48:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:04.558 Nvme0n1 00:14:04.558 12:48:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:04.558 [ 00:14:04.558 { 00:14:04.558 "name": "Nvme0n1", 00:14:04.558 "aliases": [ 00:14:04.558 "5f6df1c4-f896-4274-8fbf-6fb01ce7822b" 00:14:04.558 ], 00:14:04.558 "product_name": "NVMe disk", 00:14:04.558 "block_size": 4096, 00:14:04.558 "num_blocks": 38912, 00:14:04.558 "uuid": "5f6df1c4-f896-4274-8fbf-6fb01ce7822b", 00:14:04.558 "assigned_rate_limits": { 00:14:04.558 "rw_ios_per_sec": 0, 00:14:04.558 "rw_mbytes_per_sec": 0, 00:14:04.558 "r_mbytes_per_sec": 0, 00:14:04.558 "w_mbytes_per_sec": 0 00:14:04.558 }, 00:14:04.558 "claimed": false, 00:14:04.558 "zoned": false, 00:14:04.558 "supported_io_types": { 00:14:04.558 "read": true, 00:14:04.558 "write": true, 00:14:04.558 "unmap": true, 00:14:04.558 "flush": true, 00:14:04.558 "reset": true, 00:14:04.558 "nvme_admin": true, 00:14:04.558 "nvme_io": true, 00:14:04.558 "nvme_io_md": false, 00:14:04.558 "write_zeroes": true, 00:14:04.558 "zcopy": false, 00:14:04.558 "get_zone_info": false, 00:14:04.558 "zone_management": false, 00:14:04.558 "zone_append": false, 00:14:04.558 "compare": true, 00:14:04.558 "compare_and_write": true, 00:14:04.558 "abort": true, 00:14:04.558 "seek_hole": false, 00:14:04.558 "seek_data": false, 00:14:04.558 "copy": true, 00:14:04.558 "nvme_iov_md": false 00:14:04.558 }, 00:14:04.558 "memory_domains": [ 00:14:04.558 { 00:14:04.558 "dma_device_id": "system", 00:14:04.558 "dma_device_type": 1 00:14:04.558 } 00:14:04.558 ], 00:14:04.558 "driver_specific": { 00:14:04.558 "nvme": [ 00:14:04.558 { 00:14:04.558 "trid": { 00:14:04.558 "trtype": "TCP", 00:14:04.558 "adrfam": "IPv4", 00:14:04.558 "traddr": "10.0.0.2", 00:14:04.558 "trsvcid": "4420", 00:14:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:04.558 }, 00:14:04.558 "ctrlr_data": { 00:14:04.558 "cntlid": 1, 00:14:04.558 "vendor_id": "0x8086", 00:14:04.558 "model_number": "SPDK bdev Controller", 00:14:04.558 "serial_number": "SPDK0", 
00:14:04.558 "firmware_revision": "24.09", 00:14:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:04.558 "oacs": { 00:14:04.558 "security": 0, 00:14:04.558 "format": 0, 00:14:04.558 "firmware": 0, 00:14:04.558 "ns_manage": 0 00:14:04.558 }, 00:14:04.558 "multi_ctrlr": true, 00:14:04.558 "ana_reporting": false 00:14:04.558 }, 00:14:04.558 "vs": { 00:14:04.558 "nvme_version": "1.3" 00:14:04.558 }, 00:14:04.558 "ns_data": { 00:14:04.558 "id": 1, 00:14:04.558 "can_share": true 00:14:04.558 } 00:14:04.558 } 00:14:04.558 ], 00:14:04.558 "mp_policy": "active_passive" 00:14:04.558 } 00:14:04.558 } 00:14:04.558 ] 00:14:04.558 12:48:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1668943 00:14:04.558 12:48:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:04.558 12:48:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:04.817 Running I/O for 10 seconds... 00:14:05.810 Latency(us) 00:14:05.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.810 Nvme0n1 : 1.00 23330.00 91.13 0.00 0.00 0.00 0.00 0.00 00:14:05.810 =================================================================================================================== 00:14:05.810 Total : 23330.00 91.13 0.00 0.00 0.00 0.00 0.00 00:14:05.810 00:14:06.745 12:48:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:06.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.745 Nvme0n1 : 2.00 23419.50 91.48 0.00 0.00 0.00 0.00 0.00 00:14:06.745 =================================================================================================================== 00:14:06.745 Total : 23419.50 91.48 0.00 0.00 0.00 0.00 0.00 00:14:06.745 00:14:06.745 true 00:14:06.745 12:48:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:06.745 12:48:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:07.003 12:48:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:07.003 12:48:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:07.003 12:48:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1668943 00:14:07.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.939 Nvme0n1 : 3.00 23386.67 91.35 0.00 0.00 0.00 0.00 0.00 00:14:07.939 =================================================================================================================== 00:14:07.939 Total : 23386.67 91.35 0.00 0.00 0.00 0.00 0.00 00:14:07.939 00:14:08.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.873 Nvme0n1 : 4.00 23440.75 91.57 0.00 0.00 0.00 0.00 0.00 00:14:08.873 =================================================================================================================== 00:14:08.873 Total : 23440.75 91.57 0.00 
0.00 0.00 0.00 0.00 00:14:08.873 00:14:09.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.809 Nvme0n1 : 5.00 23412.80 91.46 0.00 0.00 0.00 0.00 0.00 00:14:09.809 =================================================================================================================== 00:14:09.809 Total : 23412.80 91.46 0.00 0.00 0.00 0.00 0.00 00:14:09.809 00:14:10.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.741 Nvme0n1 : 6.00 23433.50 91.54 0.00 0.00 0.00 0.00 0.00 00:14:10.741 =================================================================================================================== 00:14:10.741 Total : 23433.50 91.54 0.00 0.00 0.00 0.00 0.00 00:14:10.741 00:14:11.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.676 Nvme0n1 : 7.00 23455.71 91.62 0.00 0.00 0.00 0.00 0.00 00:14:11.676 =================================================================================================================== 00:14:11.676 Total : 23455.71 91.62 0.00 0.00 0.00 0.00 0.00 00:14:11.676 00:14:13.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.052 Nvme0n1 : 8.00 23476.12 91.70 0.00 0.00 0.00 0.00 0.00 00:14:13.052 =================================================================================================================== 00:14:13.052 Total : 23476.12 91.70 0.00 0.00 0.00 0.00 0.00 00:14:13.052 00:14:13.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.989 Nvme0n1 : 9.00 23490.56 91.76 0.00 0.00 0.00 0.00 0.00 00:14:13.989 =================================================================================================================== 00:14:13.989 Total : 23490.56 91.76 0.00 0.00 0.00 0.00 0.00 00:14:13.989 00:14:14.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.960 Nvme0n1 : 10.00 23499.10 91.79 0.00 0.00 0.00 0.00 0.00 00:14:14.960 =================================================================================================================== 00:14:14.960 Total : 23499.10 91.79 0.00 0.00 0.00 0.00 0.00 00:14:14.960 00:14:14.960 00:14:14.960 Latency(us) 00:14:14.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.960 Nvme0n1 : 10.00 23502.44 91.81 0.00 0.00 5443.35 1517.30 10428.77 00:14:14.960 =================================================================================================================== 00:14:14.960 Total : 23502.44 91.81 0.00 0.00 5443.35 1517.30 10428.77 00:14:14.960 0 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1668681 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1668681 ']' 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1668681 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668681 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:14.960 12:48:45 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668681' 00:14:14.960 killing process with pid 1668681 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1668681 00:14:14.960 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.960 00:14:14.960 Latency(us) 00:14:14.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.960 =================================================================================================================== 00:14:14.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1668681 00:14:14.960 12:48:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.219 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:15.479 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:15.479 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:15.479 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:15.479 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:15.479 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1665563 00:14:15.479 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1665563 00:14:15.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1665563 Killed "${NVMF_APP[@]}" "$@" 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1670741 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1670741 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1670741 ']' 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.740 12:48:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:15.740 [2024-07-15 12:48:46.497908] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:15.740 [2024-07-15 12:48:46.497954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.740 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.740 [2024-07-15 12:48:46.567789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.740 [2024-07-15 12:48:46.645760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.740 [2024-07-15 12:48:46.645797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.740 [2024-07-15 12:48:46.645804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.740 [2024-07-15 12:48:46.645809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.740 [2024-07-15 12:48:46.645814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:15.740 [2024-07-15 12:48:46.645832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:16.679 [2024-07-15 12:48:47.490012] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:16.679 [2024-07-15 12:48:47.490097] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:16.679 [2024-07-15 12:48:47.490121] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.679 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:16.938 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f6df1c4-f896-4274-8fbf-6fb01ce7822b -t 2000 00:14:16.938 [ 00:14:16.938 { 00:14:16.938 "name": "5f6df1c4-f896-4274-8fbf-6fb01ce7822b", 00:14:16.938 "aliases": [ 00:14:16.938 "lvs/lvol" 00:14:16.938 ], 00:14:16.938 "product_name": "Logical Volume", 00:14:16.938 "block_size": 4096, 00:14:16.938 "num_blocks": 38912, 00:14:16.938 "uuid": "5f6df1c4-f896-4274-8fbf-6fb01ce7822b", 00:14:16.938 "assigned_rate_limits": { 00:14:16.938 "rw_ios_per_sec": 0, 00:14:16.938 "rw_mbytes_per_sec": 0, 00:14:16.938 "r_mbytes_per_sec": 0, 00:14:16.939 "w_mbytes_per_sec": 0 00:14:16.939 }, 00:14:16.939 "claimed": false, 00:14:16.939 "zoned": false, 00:14:16.939 "supported_io_types": { 00:14:16.939 "read": true, 00:14:16.939 "write": true, 00:14:16.939 "unmap": true, 00:14:16.939 "flush": false, 00:14:16.939 "reset": true, 00:14:16.939 "nvme_admin": false, 00:14:16.939 "nvme_io": false, 00:14:16.939 "nvme_io_md": 
false, 00:14:16.939 "write_zeroes": true, 00:14:16.939 "zcopy": false, 00:14:16.939 "get_zone_info": false, 00:14:16.939 "zone_management": false, 00:14:16.939 "zone_append": false, 00:14:16.939 "compare": false, 00:14:16.939 "compare_and_write": false, 00:14:16.939 "abort": false, 00:14:16.939 "seek_hole": true, 00:14:16.939 "seek_data": true, 00:14:16.939 "copy": false, 00:14:16.939 "nvme_iov_md": false 00:14:16.939 }, 00:14:16.939 "driver_specific": { 00:14:16.939 "lvol": { 00:14:16.939 "lvol_store_uuid": "49dd3d0f-5dd7-4039-8c91-98082d2a0434", 00:14:16.939 "base_bdev": "aio_bdev", 00:14:16.939 "thin_provision": false, 00:14:16.939 "num_allocated_clusters": 38, 00:14:16.939 "snapshot": false, 00:14:16.939 "clone": false, 00:14:16.939 "esnap_clone": false 00:14:16.939 } 00:14:16.939 } 00:14:16.939 } 00:14:16.939 ] 00:14:16.939 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:16.939 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:16.939 12:48:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:17.198 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:17.198 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:17.198 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:17.458 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:17.458 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:17.458 [2024-07-15 12:48:48.374797] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:17.458 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:17.458 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:17.458 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:17.458 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
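
Annotation: this is the dirty-path teardown. The first target (pid 1665563) was SIGKILLed with the lvstore still open, a fresh target (pid 1670741) was started, and re-creating the AIO bdev triggered the bs_recover / "Recover: blob 0x0 / 0x1" notices above: the blobstore replaying its dirty metadata. The assertion in flight here hot-removes aio_bdev and insists the lvstore vanish with it. A long-hand sketch of what the NOT wrapper from autotest_common.sh amounts to (the if/exit form is this annotation's assumption, not the helper's literal code):

    rpc.py bdev_aio_delete aio_bdev            # hot-remove the base bdev; vbdev_lvol
                                               # closes lvstore 49dd3d0f-... with it
    if rpc.py bdev_lvol_get_lvstores -u $lvs; then
        echo "lvstore survived base bdev removal" >&2; exit 1
    fi                                         # expected: JSON-RPC error -19,
                                               # "No such device" (next in the log)
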
00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:17.717 request: 00:14:17.717 { 00:14:17.717 "uuid": "49dd3d0f-5dd7-4039-8c91-98082d2a0434", 00:14:17.717 "method": "bdev_lvol_get_lvstores", 00:14:17.717 "req_id": 1 00:14:17.717 } 00:14:17.717 Got JSON-RPC error response 00:14:17.717 response: 00:14:17.717 { 00:14:17.717 "code": -19, 00:14:17.717 "message": "No such device" 00:14:17.717 } 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.717 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.976 aio_bdev 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:17.976 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:18.235 12:48:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f6df1c4-f896-4274-8fbf-6fb01ce7822b -t 2000 00:14:18.235 [ 00:14:18.235 { 00:14:18.235 "name": "5f6df1c4-f896-4274-8fbf-6fb01ce7822b", 00:14:18.235 "aliases": [ 00:14:18.235 "lvs/lvol" 00:14:18.235 ], 00:14:18.235 "product_name": "Logical Volume", 00:14:18.235 "block_size": 4096, 00:14:18.235 "num_blocks": 38912, 00:14:18.235 "uuid": "5f6df1c4-f896-4274-8fbf-6fb01ce7822b", 00:14:18.235 "assigned_rate_limits": { 00:14:18.235 "rw_ios_per_sec": 0, 00:14:18.235 "rw_mbytes_per_sec": 0, 00:14:18.235 "r_mbytes_per_sec": 0, 00:14:18.235 "w_mbytes_per_sec": 0 00:14:18.235 }, 00:14:18.235 "claimed": false, 00:14:18.235 "zoned": false, 00:14:18.235 "supported_io_types": { 
00:14:18.235 "read": true, 00:14:18.235 "write": true, 00:14:18.235 "unmap": true, 00:14:18.235 "flush": false, 00:14:18.235 "reset": true, 00:14:18.235 "nvme_admin": false, 00:14:18.235 "nvme_io": false, 00:14:18.235 "nvme_io_md": false, 00:14:18.235 "write_zeroes": true, 00:14:18.235 "zcopy": false, 00:14:18.235 "get_zone_info": false, 00:14:18.235 "zone_management": false, 00:14:18.235 "zone_append": false, 00:14:18.235 "compare": false, 00:14:18.235 "compare_and_write": false, 00:14:18.235 "abort": false, 00:14:18.235 "seek_hole": true, 00:14:18.235 "seek_data": true, 00:14:18.235 "copy": false, 00:14:18.235 "nvme_iov_md": false 00:14:18.235 }, 00:14:18.235 "driver_specific": { 00:14:18.235 "lvol": { 00:14:18.235 "lvol_store_uuid": "49dd3d0f-5dd7-4039-8c91-98082d2a0434", 00:14:18.235 "base_bdev": "aio_bdev", 00:14:18.235 "thin_provision": false, 00:14:18.235 "num_allocated_clusters": 38, 00:14:18.235 "snapshot": false, 00:14:18.235 "clone": false, 00:14:18.235 "esnap_clone": false 00:14:18.235 } 00:14:18.235 } 00:14:18.235 } 00:14:18.235 ] 00:14:18.235 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:18.235 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:18.235 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:18.494 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:18.494 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:18.494 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:18.753 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:18.753 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f6df1c4-f896-4274-8fbf-6fb01ce7822b 00:14:18.753 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49dd3d0f-5dd7-4039-8c91-98082d2a0434 00:14:19.011 12:48:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.270 00:14:19.270 real 0m17.779s 00:14:19.270 user 0m45.452s 00:14:19.270 sys 0m3.697s 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:19.270 ************************************ 00:14:19.270 END TEST lvs_grow_dirty 00:14:19.270 ************************************ 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:19.270 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:19.271 nvmf_trace.0 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.271 rmmod nvme_tcp 00:14:19.271 rmmod nvme_fabrics 00:14:19.271 rmmod nvme_keyring 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1670741 ']' 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1670741 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1670741 ']' 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1670741 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.271 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1670741 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1670741' 00:14:19.529 killing process with pid 1670741 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1670741 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1670741 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.529 
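The EXIT trap running above does two things worth noting: process_shm archives the target's shared-memory trace file so it can be replayed offline with spdk_trace, and nvmftestfini unloads the host-side NVMe/TCP modules and kills the target. Condensed into a sketch (file names match the trace; the harness's retry loop around module removal is simplified here, and output_dir is illustrative):

  # archive /dev/shm/nvmf_trace.0 for offline analysis
  shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')
  tar -C /dev/shm/ -czf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"
  # module removal may legitimately fail if nothing is loaded, hence set +e
  set +e
  modprobe -v -r nvme-tcp        # its dependents nvme-fabrics and nvme-keyring unload with it here
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"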
12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.529 12:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.066 12:48:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.066 00:14:22.066 real 0m42.894s 00:14:22.066 user 1m6.824s 00:14:22.066 sys 0m9.790s 00:14:22.066 12:48:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.066 12:48:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:22.066 ************************************ 00:14:22.066 END TEST nvmf_lvs_grow 00:14:22.066 ************************************ 00:14:22.066 12:48:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:22.066 12:48:52 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:22.066 12:48:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:22.066 12:48:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.066 12:48:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.066 ************************************ 00:14:22.066 START TEST nvmf_bdev_io_wait 00:14:22.066 ************************************ 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:22.066 * Looking for test storage... 
00:14:22.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.066 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.067 12:48:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:27.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:27.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:27.385 Found net devices under 0000:86:00.0: cvl_0_0 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.385 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:27.386 Found net devices under 0000:86:00.1: cvl_0_1 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.386 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:27.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:14:27.645 00:14:27.645 --- 10.0.0.2 ping statistics --- 00:14:27.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.645 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:14:27.645 00:14:27.645 --- 10.0.0.1 ping statistics --- 00:14:27.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.645 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1675005 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1675005 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1675005 ']' 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.645 12:48:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:27.645 [2024-07-15 12:48:58.505454] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
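nvmftestinit has now split the two e810 ports across network namespaces so NVMe/TCP traffic genuinely crosses the wire: cvl_0_0 (the target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, and both directions are ping-verified before the target starts. A minimal sketch of the topology, condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator NIC, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port past any host firewall
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target process afterwards is launched through ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with the namespace command in the trace.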
00:14:27.645 [2024-07-15 12:48:58.505496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.645 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.645 [2024-07-15 12:48:58.573338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.905 [2024-07-15 12:48:58.654683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.905 [2024-07-15 12:48:58.654719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.905 [2024-07-15 12:48:58.654728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.905 [2024-07-15 12:48:58.654733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.905 [2024-07-15 12:48:58.654738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.905 [2024-07-15 12:48:58.654780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.905 [2024-07-15 12:48:58.654814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.905 [2024-07-15 12:48:58.654920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.905 [2024-07-15 12:48:58.654921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.474 [2024-07-15 12:48:59.420920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
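Because the target was started with --wait-for-rpc, subsystem initialization is deferred until the test has shrunk the bdev_io pool; that is the point of bdev_io_wait, which needs bdev_io allocation to fail under load so the queue-io-wait path gets exercised. The bring-up order above, as a sketch (rpc.py path as in the log; reading -p and -c as the bdev_io pool and per-thread cache sizes is my interpretation of the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1              # tiny bdev_io pool: allocations will fail at queue depth 128
  $rpc framework_start_init                    # only now do the SPDK subsystems initialize
  $rpc nvmf_create_transport -t tcp -o -u 8192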
00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.474 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.734 Malloc0 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.734 [2024-07-15 12:48:59.476997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1675163 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1675166 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:28.734 { 00:14:28.734 "params": { 00:14:28.734 "name": "Nvme$subsystem", 00:14:28.734 "trtype": "$TEST_TRANSPORT", 00:14:28.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:28.734 "adrfam": "ipv4", 00:14:28.734 "trsvcid": "$NVMF_PORT", 00:14:28.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:28.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:28.734 "hdgst": ${hdgst:-false}, 00:14:28.734 "ddgst": ${ddgst:-false} 00:14:28.734 }, 00:14:28.734 "method": "bdev_nvme_attach_controller" 00:14:28.734 } 00:14:28.734 EOF 00:14:28.734 )") 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1675169 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:28.734 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:28.734 { 00:14:28.734 "params": { 00:14:28.734 "name": "Nvme$subsystem", 00:14:28.735 "trtype": "$TEST_TRANSPORT", 00:14:28.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "$NVMF_PORT", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:28.735 "hdgst": ${hdgst:-false}, 00:14:28.735 "ddgst": ${ddgst:-false} 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 } 00:14:28.735 EOF 00:14:28.735 )") 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1675173 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:28.735 { 00:14:28.735 "params": { 00:14:28.735 "name": "Nvme$subsystem", 00:14:28.735 "trtype": "$TEST_TRANSPORT", 00:14:28.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "$NVMF_PORT", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:28.735 "hdgst": ${hdgst:-false}, 00:14:28.735 "ddgst": ${ddgst:-false} 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 } 00:14:28.735 EOF 00:14:28.735 )") 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:28.735 12:48:59 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:28.735 { 00:14:28.735 "params": { 00:14:28.735 "name": "Nvme$subsystem", 00:14:28.735 "trtype": "$TEST_TRANSPORT", 00:14:28.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "$NVMF_PORT", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:28.735 "hdgst": ${hdgst:-false}, 00:14:28.735 "ddgst": ${ddgst:-false} 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 } 00:14:28.735 EOF 00:14:28.735 )") 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1675163 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:28.735 "params": { 00:14:28.735 "name": "Nvme1", 00:14:28.735 "trtype": "tcp", 00:14:28.735 "traddr": "10.0.0.2", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "4420", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.735 "hdgst": false, 00:14:28.735 "ddgst": false 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 }' 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
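The four bdevperf instances each receive that generated JSON on /dev/fd/63 and attach the same Nvme1 controller over TCP, so write, read, flush and unmap workloads drive one subsystem from four separate processes on disjoint core masks. Condensed from the trace (the process-substitution form is a sketch; the harness itself wires up fd 63):

  bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $bp -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  $bp -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
  $bp -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  $bp -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  wait

As a sanity check on the result tables further down, IOPS times the 4096-byte I/O size should reproduce the MiB/s column: 7814.96 x 4096 / 2^20 = 30.53 MiB/s for the write job, which matches.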
00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:28.735 "params": { 00:14:28.735 "name": "Nvme1", 00:14:28.735 "trtype": "tcp", 00:14:28.735 "traddr": "10.0.0.2", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "4420", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.735 "hdgst": false, 00:14:28.735 "ddgst": false 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 }' 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:28.735 "params": { 00:14:28.735 "name": "Nvme1", 00:14:28.735 "trtype": "tcp", 00:14:28.735 "traddr": "10.0.0.2", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "4420", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.735 "hdgst": false, 00:14:28.735 "ddgst": false 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 }' 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:28.735 12:48:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:28.735 "params": { 00:14:28.735 "name": "Nvme1", 00:14:28.735 "trtype": "tcp", 00:14:28.735 "traddr": "10.0.0.2", 00:14:28.735 "adrfam": "ipv4", 00:14:28.735 "trsvcid": "4420", 00:14:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.735 "hdgst": false, 00:14:28.735 "ddgst": false 00:14:28.735 }, 00:14:28.735 "method": "bdev_nvme_attach_controller" 00:14:28.735 }' 00:14:28.735 [2024-07-15 12:48:59.527426] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:28.735 [2024-07-15 12:48:59.527475] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:28.735 [2024-07-15 12:48:59.527466] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:28.735 [2024-07-15 12:48:59.527513] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:28.735 [2024-07-15 12:48:59.529029] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:28.735 [2024-07-15 12:48:59.529072] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:28.735 [2024-07-15 12:48:59.529165] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:28.735 [2024-07-15 12:48:59.529200] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:28.735 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.735 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.994 [2024-07-15 12:48:59.702355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.994 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.994 [2024-07-15 12:48:59.780255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:28.994 [2024-07-15 12:48:59.806329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.994 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.994 [2024-07-15 12:48:59.884567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:28.994 [2024-07-15 12:48:59.906183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.253 [2024-07-15 12:48:59.959688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.253 [2024-07-15 12:48:59.994473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:29.253 [2024-07-15 12:49:00.038151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:29.253 Running I/O for 1 seconds... 00:14:29.253 Running I/O for 1 seconds... 00:14:29.253 Running I/O for 1 seconds... 00:14:29.511 Running I/O for 1 seconds... 00:14:30.445 00:14:30.446 Latency(us) 00:14:30.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.446 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:30.446 Nvme1n1 : 1.01 7814.96 30.53 0.00 0.00 16247.59 6610.59 27468.13 00:14:30.446 =================================================================================================================== 00:14:30.446 Total : 7814.96 30.53 0.00 0.00 16247.59 6610.59 27468.13 00:14:30.446 00:14:30.446 Latency(us) 00:14:30.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.446 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:30.446 Nvme1n1 : 1.00 245754.56 959.98 0.00 0.00 518.56 206.58 662.48 00:14:30.446 =================================================================================================================== 00:14:30.446 Total : 245754.56 959.98 0.00 0.00 518.56 206.58 662.48 00:14:30.446 00:14:30.446 Latency(us) 00:14:30.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.446 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:30.446 Nvme1n1 : 1.01 7390.81 28.87 0.00 0.00 17261.74 6040.71 31229.33 00:14:30.446 =================================================================================================================== 00:14:30.446 Total : 7390.81 28.87 0.00 0.00 17261.74 6040.71 31229.33 00:14:30.446 00:14:30.446 Latency(us) 00:14:30.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.446 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:30.446 Nvme1n1 : 1.00 12305.27 48.07 0.00 0.00 10374.77 4587.52 21313.45 00:14:30.446 =================================================================================================================== 00:14:30.446 Total : 12305.27 48.07 0.00 0.00 10374.77 4587.52 21313.45 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1675166 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1675169 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1675173 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.704 rmmod nvme_tcp 00:14:30.704 rmmod nvme_fabrics 00:14:30.704 rmmod nvme_keyring 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1675005 ']' 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1675005 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1675005 ']' 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1675005 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1675005 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1675005' 00:14:30.704 killing process with pid 1675005 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1675005 00:14:30.704 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1675005 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.962 12:49:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.491 12:49:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.491 00:14:33.491 real 0m11.254s 00:14:33.491 user 0m19.398s 00:14:33.491 sys 0m6.031s 00:14:33.491 12:49:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.491 12:49:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.491 ************************************ 00:14:33.491 END TEST nvmf_bdev_io_wait 00:14:33.491 ************************************ 00:14:33.491 12:49:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:33.491 12:49:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:33.491 12:49:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:33.491 12:49:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.491 12:49:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.491 ************************************ 00:14:33.491 START TEST nvmf_queue_depth 00:14:33.491 ************************************ 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:33.491 * Looking for test storage... 
00:14:33.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.491 12:49:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.491 12:49:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.492 12:49:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.768 
12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:38.768 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:38.768 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:38.768 Found net devices under 0000:86:00.0: cvl_0_0 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:38.768 Found net devices under 0000:86:00.1: cvl_0_1 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.768 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:14:38.769 00:14:38.769 --- 10.0.0.2 ping statistics --- 00:14:38.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.769 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:14:38.769 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:14:39.028 00:14:39.028 --- 10.0.0.1 ping statistics --- 00:14:39.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.028 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:14:39.028 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1679034 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1679034 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1679034 ']' 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.029 12:49:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.029 [2024-07-15 12:49:09.810256] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:39.029 [2024-07-15 12:49:09.810298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.029 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.029 [2024-07-15 12:49:09.883453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.029 [2024-07-15 12:49:09.959623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.029 [2024-07-15 12:49:09.959662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.029 [2024-07-15 12:49:09.959668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.029 [2024-07-15 12:49:09.959674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.029 [2024-07-15 12:49:09.959680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.029 [2024-07-15 12:49:09.959697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.963 [2024-07-15 12:49:10.655603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.963 Malloc0 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.963 
12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.963 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.964 [2024-07-15 12:49:10.717442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1679137 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1679137 /var/tmp/bdevperf.sock 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1679137 ']' 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.964 12:49:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:39.964 [2024-07-15 12:49:10.766422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
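Condensed, the queue-depth setup being traced here is the following RPC sequence (a sketch: rpc.py stands in for the full scripts/rpc.py invocation behind rpc_cmd, and the controller attach plus perform_tests steps appear in the trace just below):

# build the target side behind the 10.0.0.2 listener
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# start bdevperf idle (-z) on its own RPC socket: queue depth 1024, 4 KiB verify I/O, 10 s
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# attach the exported namespace as NVMe0n1, then kick off the run
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests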
00:14:39.964 [2024-07-15 12:49:10.766464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679137 ] 00:14:39.964 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.964 [2024-07-15 12:49:10.834240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.964 [2024-07-15 12:49:10.913601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:40.898 NVMe0n1 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.898 12:49:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:40.899 Running I/O for 10 seconds... 00:14:53.183 00:14:53.183 Latency(us) 00:14:53.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.183 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:53.183 Verification LBA range: start 0x0 length 0x4000 00:14:53.183 NVMe0n1 : 10.07 12268.84 47.93 0.00 0.00 83173.93 19375.86 55620.12 00:14:53.183 =================================================================================================================== 00:14:53.183 Total : 12268.84 47.93 0.00 0.00 83173.93 19375.86 55620.12 00:14:53.183 0 00:14:53.183 12:49:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1679137 00:14:53.183 12:49:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1679137 ']' 00:14:53.183 12:49:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1679137 00:14:53.183 12:49:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:53.183 12:49:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.183 12:49:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1679137 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1679137' 00:14:53.183 killing process with pid 1679137 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1679137 00:14:53.183 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.183 00:14:53.183 Latency(us) 00:14:53.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.183 
=================================================================================================================== 00:14:53.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1679137 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.183 rmmod nvme_tcp 00:14:53.183 rmmod nvme_fabrics 00:14:53.183 rmmod nvme_keyring 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1679034 ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1679034 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1679034 ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1679034 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1679034 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1679034' 00:14:53.183 killing process with pid 1679034 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1679034 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1679034 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.183 12:49:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.751 12:49:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.751 00:14:53.751 real 0m20.674s 00:14:53.751 user 0m25.066s 00:14:53.751 sys 0m5.917s 00:14:53.751 12:49:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.751 12:49:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:53.751 ************************************ 00:14:53.751 END TEST nvmf_queue_depth 00:14:53.751 ************************************ 00:14:53.751 12:49:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:53.751 12:49:24 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:53.751 12:49:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:53.751 12:49:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.751 12:49:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.751 ************************************ 00:14:53.751 START TEST nvmf_target_multipath 00:14:53.751 ************************************ 00:14:53.751 12:49:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:54.010 * Looking for test storage... 00:14:54.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:54.011 12:49:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:59.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:59.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:59.291 Found net devices under 0000:86:00.0: cvl_0_0 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:59.291 Found net devices under 0000:86:00.1: cvl_0_1 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.291 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:14:59.552 00:14:59.552 --- 10.0.0.2 ping statistics --- 00:14:59.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.552 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:14:59.552 00:14:59.552 --- 10.0.0.1 ping statistics --- 00:14:59.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.552 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:59.552 only one NIC for nvmf test 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.552 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.552 rmmod nvme_tcp 00:14:59.812 rmmod nvme_fabrics 00:14:59.812 rmmod nvme_keyring 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.812 12:49:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:01.717 00:15:01.717 real 0m8.019s 00:15:01.717 user 0m1.652s 00:15:01.717 sys 0m4.345s 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.717 12:49:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:01.717 ************************************ 00:15:01.717 END TEST nvmf_target_multipath 00:15:01.717 ************************************ 00:15:01.977 12:49:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:01.977 12:49:32 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:01.977 12:49:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:01.977 12:49:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.977 12:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:01.977 ************************************ 00:15:01.977 START TEST nvmf_zcopy 00:15:01.977 ************************************ 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:01.977 * Looking for test storage... 
00:15:01.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:01.977 12:49:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:08.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.543 
12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:08.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:08.543 Found net devices under 0000:86:00.0: cvl_0_0 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:08.543 Found net devices under 0000:86:00.1: cvl_0_1 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.543 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:08.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:08.544 00:15:08.544 --- 10.0.0.2 ping statistics --- 00:15:08.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.544 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:15:08.544 00:15:08.544 --- 10.0.0.1 ping statistics --- 00:15:08.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.544 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1687934 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1687934 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1687934 ']' 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.544 12:49:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 [2024-07-15 12:49:38.651808] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:08.544 [2024-07-15 12:49:38.651851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.544 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.544 [2024-07-15 12:49:38.721982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.544 [2024-07-15 12:49:38.799345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.544 [2024-07-15 12:49:38.799379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:08.544 [2024-07-15 12:49:38.799386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.544 [2024-07-15 12:49:38.799392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.544 [2024-07-15 12:49:38.799397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.544 [2024-07-15 12:49:38.799413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 [2024-07-15 12:49:39.491263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.544 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 [2024-07-15 12:49:39.511399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 malloc0 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.804 
12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:08.804 { 00:15:08.804 "params": { 00:15:08.804 "name": "Nvme$subsystem", 00:15:08.804 "trtype": "$TEST_TRANSPORT", 00:15:08.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:08.804 "adrfam": "ipv4", 00:15:08.804 "trsvcid": "$NVMF_PORT", 00:15:08.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:08.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:08.804 "hdgst": ${hdgst:-false}, 00:15:08.804 "ddgst": ${ddgst:-false} 00:15:08.804 }, 00:15:08.804 "method": "bdev_nvme_attach_controller" 00:15:08.804 } 00:15:08.804 EOF 00:15:08.804 )") 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:08.804 12:49:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:08.804 "params": { 00:15:08.804 "name": "Nvme1", 00:15:08.804 "trtype": "tcp", 00:15:08.804 "traddr": "10.0.0.2", 00:15:08.804 "adrfam": "ipv4", 00:15:08.804 "trsvcid": "4420", 00:15:08.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.804 "hdgst": false, 00:15:08.804 "ddgst": false 00:15:08.804 }, 00:15:08.804 "method": "bdev_nvme_attach_controller" 00:15:08.804 }' 00:15:08.804 [2024-07-15 12:49:39.590829] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:08.804 [2024-07-15 12:49:39.590877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688176 ] 00:15:08.804 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.804 [2024-07-15 12:49:39.650912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.804 [2024-07-15 12:49:39.724495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.373 Running I/O for 10 seconds... 
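
For anyone replaying this test step outside the harness, the xtrace above reduces to the standalone sketch below. The rpc_cmd lines are thin wrappers around scripts/rpc.py, with flags copied verbatim from the trace, and the bdevperf attach parameters are copied from the printf output above. Two pieces are assumptions rather than things the trace shows directly: the outer "subsystems"/"config" JSON wrapper (gen_nvmf_target_json's full output is only partially visible here) and the use of a stdin heredoc in place of the /dev/fd/62 process substitution.

#!/usr/bin/env bash
# Sketch only -- replays the traced zcopy setup and verify run by hand.
# Assumes an SPDK build tree at $SPDK and nvmf_tgt already running inside the
# cvl_0_0_ns_spdk namespace (as started earlier in this log), listening for
# RPCs on the default /var/tmp/spdk.sock.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target-side setup, as issued via rpc_cmd above (flags verbatim from the trace):
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" bdev_malloc_create 32 4096 -b malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator-side verify run (zcopy.sh@33 above); the config is fed over stdin
# instead of /dev/fd/62, and the wrapper around the attach params is assumed.
"$SPDK/build/examples/bdevperf" -t 10 -q 128 -w verify -o 8192 --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
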
00:15:19.351 00:15:19.351 Latency(us) 00:15:19.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.351 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:19.351 Verification LBA range: start 0x0 length 0x1000 00:15:19.351 Nvme1n1 : 10.01 8689.40 67.89 0.00 0.00 14687.86 2778.16 24162.84 00:15:19.351 =================================================================================================================== 00:15:19.351 Total : 8689.40 67.89 0.00 0.00 14687.86 2778.16 24162.84 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1689893 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:19.351 { 00:15:19.351 "params": { 00:15:19.351 "name": "Nvme$subsystem", 00:15:19.351 "trtype": "$TEST_TRANSPORT", 00:15:19.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.351 "adrfam": "ipv4", 00:15:19.351 "trsvcid": "$NVMF_PORT", 00:15:19.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.351 "hdgst": ${hdgst:-false}, 00:15:19.351 "ddgst": ${ddgst:-false} 00:15:19.351 }, 00:15:19.351 "method": "bdev_nvme_attach_controller" 00:15:19.351 } 00:15:19.351 EOF 00:15:19.351 )") 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:19.351 [2024-07-15 12:49:50.265719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.351 [2024-07-15 12:49:50.265754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:19.351 12:49:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:19.351 "params": { 00:15:19.351 "name": "Nvme1", 00:15:19.351 "trtype": "tcp", 00:15:19.351 "traddr": "10.0.0.2", 00:15:19.351 "adrfam": "ipv4", 00:15:19.351 "trsvcid": "4420", 00:15:19.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.351 "hdgst": false, 00:15:19.351 "ddgst": false 00:15:19.351 }, 00:15:19.351 "method": "bdev_nvme_attach_controller" 00:15:19.351 }' 00:15:19.351 [2024-07-15 12:49:50.277720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.351 [2024-07-15 12:49:50.277733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.351 [2024-07-15 12:49:50.285738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.351 [2024-07-15 12:49:50.285751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.351 [2024-07-15 12:49:50.293760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.351 [2024-07-15 12:49:50.293771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.351 [2024-07-15 12:49:50.304423] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:19.351 [2024-07-15 12:49:50.304467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689893 ] 00:15:19.610 [2024-07-15 12:49:50.305795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.305813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.317828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.317838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.610 [2024-07-15 12:49:50.329859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.329870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.341894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.341904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.353925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.353935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.365956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.365966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.372858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.610 [2024-07-15 12:49:50.377991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.378001] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.390019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.390031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.402052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.402063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.414091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.414113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.426120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.426132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.438148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.438159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.448331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.610 [2024-07-15 12:49:50.450182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.450195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.462223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.462247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.474262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.474277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.486289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.486302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.498311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.498322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.510341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.510353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.522374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.522384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.534424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.534444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.610 [2024-07-15 12:49:50.546441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.610 [2024-07-15 12:49:50.546456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:19.611 [2024-07-15 12:49:50.558473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.611 [2024-07-15 12:49:50.558487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.570501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.570512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.582544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.582554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.594579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.594588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.606616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.606630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.618645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.618660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.630680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.630697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 Running I/O for 5 seconds... 00:15:19.870 [2024-07-15 12:49:50.642711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.642723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.653610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.653628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.662322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.662341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.677332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.677352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.693339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.693359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.707582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.707601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.718784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.718804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.727595] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.727614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.736222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.736245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.750641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.750661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.764284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.764303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.778366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.778385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.787350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.787369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.802119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.802138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.813199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.813218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.870 [2024-07-15 12:49:50.822060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.870 [2024-07-15 12:49:50.822079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.831454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.831485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.840862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.840881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.855748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.855767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.867117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.867136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.875939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.875958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.885222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.885245] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.894571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.894590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.903200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.903218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.917747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.917765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.931787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.931806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.943122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.943141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.957124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.957143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.971053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.971073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.985230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.985249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:50.996520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:50.996541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:51.010588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.129 [2024-07-15 12:49:51.010607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.129 [2024-07-15 12:49:51.024182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.130 [2024-07-15 12:49:51.024201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.130 [2024-07-15 12:49:51.033052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.130 [2024-07-15 12:49:51.033071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.130 [2024-07-15 12:49:51.047686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.130 [2024-07-15 12:49:51.047705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.130 [2024-07-15 12:49:51.061654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.130 [2024-07-15 12:49:51.061673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.130 [2024-07-15 12:49:51.072313] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:20.130 [2024-07-15 12:49:51.072340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record pair (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats roughly 300 more times between 12:49:51.086 and 12:49:54.739 (elapsed 00:15:20.391 through 00:15:23.839); only the timestamps differ ...]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.585394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.596830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.596850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.611416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.611435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.624832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.624851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.633788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.633807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.642986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.643004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.652788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.652808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.667216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.667242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.680876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.680896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.689817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.689836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.704654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.704673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.715504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.715524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.724817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.724835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.739203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.739222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.748325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.748344] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.757297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.757317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.767286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.767305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.839 [2024-07-15 12:49:54.782081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.839 [2024-07-15 12:49:54.782100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.798032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.798052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.812078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.812097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.826177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.826196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.839736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.839755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.848704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.848727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.862973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.862992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.877253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.877272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.892516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.892535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.906384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.906402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.915375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.915393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.929914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.929933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.943731] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.943751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.957872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.957892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.972047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.972068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:54.986081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:54.986100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:55.000106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:55.000125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:55.013821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:55.013840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:55.022631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:55.022649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:55.037315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:55.037334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.097 [2024-07-15 12:49:55.051028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.097 [2024-07-15 12:49:55.051047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.065196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.065217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.074399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.074420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.088348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.088368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.102307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.102331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.115953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.115973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.130157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.130177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.140600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.140618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.149813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.149831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.158393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.158412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.167133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.167153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.181291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.181310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.194826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.194845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.203741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.203759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.217988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.218007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.226849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.226867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.241372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.241391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.254996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.255014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.264050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.264068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.278251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.278271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.287189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.287207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.355 [2024-07-15 12:49:55.301577] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.355 [2024-07-15 12:49:55.301596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.310481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.310501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.319716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.319739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.328481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.328500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.337659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.337677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.352258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.352277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.365659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.365678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.374575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.374593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.383808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.383826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.393253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.393271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.408257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.408276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.419391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.419410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.433818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.433836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.613 [2024-07-15 12:49:55.447379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.613 [2024-07-15 12:49:55.447398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.461827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.461846] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.473157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.473175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.487804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.487823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.498337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.498355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.512781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.512800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.521665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.521683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.536179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.536198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.549551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.549574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.614 [2024-07-15 12:49:55.563474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.614 [2024-07-15 12:49:55.563493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.577564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.577583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.586583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.586601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.600985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.601004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.610096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.610114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.624486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.624505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.633511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.633530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.642164] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.642182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.657067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.657086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 00:15:24.872 Latency(us) 00:15:24.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.872 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:24.872 Nvme1n1 : 5.01 16661.01 130.16 0.00 0.00 7675.20 3333.79 16184.54 00:15:24.872 =================================================================================================================== 00:15:24.872 Total : 16661.01 130.16 0.00 0.00 7675.20 3333.79 16184.54 00:15:24.872 [2024-07-15 12:49:55.665183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.665202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.677210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.677231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.689258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.689274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.701283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.701303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.721344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.721367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.733374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.733391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.745406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.745422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.757437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.757453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.769466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.769491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.781509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.781520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.793544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.793556] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.805574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.805585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.872 [2024-07-15 12:49:55.817607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.872 [2024-07-15 12:49:55.817618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.130 [2024-07-15 12:49:55.829644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.130 [2024-07-15 12:49:55.829660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.130 [2024-07-15 12:49:55.837660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.130 [2024-07-15 12:49:55.837670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.130 [2024-07-15 12:49:55.845681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.130 [2024-07-15 12:49:55.845692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1689893) - No such process 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1689893 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:25.130 delay0 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.130 12:49:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:25.130 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.130 [2024-07-15 12:49:56.035302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:31.695 [2024-07-15 12:50:02.175535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e0d00 is same with the state(5) to be set 00:15:31.695 Initializing NVMe Controllers 00:15:31.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:15:31.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.695 Initialization complete. Launching workers. 00:15:31.695 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 775 00:15:31.695 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1056, failed to submit 39 00:15:31.695 success 878, unsuccess 178, failed 0 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.695 rmmod nvme_tcp 00:15:31.695 rmmod nvme_fabrics 00:15:31.695 rmmod nvme_keyring 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1687934 ']' 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1687934 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1687934 ']' 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1687934 00:15:31.695 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1687934 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1687934' 00:15:31.696 killing process with pid 1687934 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1687934 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1687934 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.696 12:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.602 12:50:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:15:33.602 00:15:33.602 real 0m31.816s 00:15:33.602 user 0m43.166s 00:15:33.602 sys 0m10.599s 00:15:33.602 12:50:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.602 12:50:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.602 ************************************ 00:15:33.602 END TEST nvmf_zcopy 00:15:33.602 ************************************ 00:15:33.863 12:50:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:33.863 12:50:04 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:33.863 12:50:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:33.863 12:50:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.863 12:50:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:33.863 ************************************ 00:15:33.863 START TEST nvmf_nmic 00:15:33.863 ************************************ 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:33.863 * Looking for test storage... 00:15:33.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.863 
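Note (editor's sketch, not part of the captured output): the NVME_HOSTNQN value set above comes from 'nvme gen-hostnqn', which wraps a random UUID in the NQN prefix defined by the NVMe spec. Assuming only coreutils/util-linux are available, an equivalent would be:

    # hypothetical stand-in for 'nvme gen-hostnqn': spec-format host NQN from a fresh UUID
    echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"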
12:50:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.863 12:50:04 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.863 12:50:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:40.437 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:40.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:40.437 Found net devices under 0000:86:00.0: cvl_0_0 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:40.437 Found net devices under 0000:86:00.1: cvl_0_1 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.437 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:40.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:15:40.438 00:15:40.438 --- 10.0.0.2 ping statistics --- 00:15:40.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.438 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:15:40.438 00:15:40.438 --- 10.0.0.1 ping statistics --- 00:15:40.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.438 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1695372 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1695372 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1695372 ']' 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.438 12:50:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.438 [2024-07-15 12:50:10.530843] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
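Note (editor's sketch, not part of the captured output): the nvmfappstart/waitforlisten trace above amounts to launching nvmf_tgt inside the target network namespace and polling its RPC socket until it answers. Roughly, using the repo-relative paths seen elsewhere in this run:

    # launch the target in the netns created earlier and record its pid
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten, approximately: retry a cheap RPC until /var/tmp/spdk.sock is up
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done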
00:15:40.438 [2024-07-15 12:50:10.530885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.438 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.438 [2024-07-15 12:50:10.599063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.438 [2024-07-15 12:50:10.679965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.438 [2024-07-15 12:50:10.679998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.438 [2024-07-15 12:50:10.680005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.438 [2024-07-15 12:50:10.680011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.438 [2024-07-15 12:50:10.680016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.438 [2024-07-15 12:50:10.680058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.438 [2024-07-15 12:50:10.680167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.438 [2024-07-15 12:50:10.680272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.438 [2024-07-15 12:50:10.680272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.438 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.438 [2024-07-15 12:50:11.391223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 Malloc0 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 [2024-07-15 12:50:11.443134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:40.698 test case1: single bdev can't be used in multiple subsystems 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 [2024-07-15 12:50:11.467044] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:40.698 [2024-07-15 12:50:11.467063] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:40.698 [2024-07-15 12:50:11.467070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.698 request: 00:15:40.698 { 00:15:40.698 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:40.698 "namespace": { 00:15:40.698 "bdev_name": "Malloc0", 00:15:40.698 "no_auto_visible": false 00:15:40.698 }, 00:15:40.698 "method": "nvmf_subsystem_add_ns", 00:15:40.698 "req_id": 1 00:15:40.698 } 00:15:40.698 Got JSON-RPC error response 00:15:40.698 response: 00:15:40.698 { 00:15:40.698 "code": -32602, 00:15:40.698 "message": "Invalid parameters" 00:15:40.698 } 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:15:40.698 Adding namespace failed - expected result. 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:40.698 test case2: host connect to nvmf target in multiple paths 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 [2024-07-15 12:50:11.479165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.698 12:50:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.073 12:50:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:43.008 12:50:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.008 12:50:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:43.008 12:50:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.008 12:50:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:43.008 12:50:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:44.958 12:50:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:44.958 [global] 00:15:44.958 thread=1 00:15:44.958 invalidate=1 00:15:44.958 rw=write 00:15:44.958 time_based=1 00:15:44.958 runtime=1 00:15:44.958 ioengine=libaio 00:15:44.958 direct=1 00:15:44.958 bs=4096 00:15:44.958 iodepth=1 00:15:44.958 norandommap=0 00:15:44.958 numjobs=1 00:15:44.958 00:15:44.958 verify_dump=1 00:15:44.958 verify_backlog=512 00:15:44.958 verify_state_save=0 00:15:44.958 do_verify=1 00:15:44.958 verify=crc32c-intel 00:15:44.958 [job0] 00:15:44.958 filename=/dev/nvme0n1 00:15:45.216 Could not set queue depth (nvme0n1) 00:15:45.474 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.474 fio-3.35 00:15:45.474 Starting 1 thread 00:15:46.409 00:15:46.409 job0: (groupid=0, jobs=1): err= 0: pid=1696445: Mon Jul 15 12:50:17 2024 00:15:46.409 read: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec) 00:15:46.409 slat (nsec): min=6423, max=25993, avg=7195.49, stdev=1046.29 00:15:46.409 clat (usec): min=185, max=431, avg=264.00, stdev=21.94 00:15:46.409 lat (usec): min=192, max=454, avg=271.19, stdev=22.12 00:15:46.409 clat percentiles (usec): 00:15:46.409 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 233], 20.00th=[ 260], 00:15:46.409 | 30.00th=[ 269], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 273], 00:15:46.409 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 281], 95.00th=[ 281], 00:15:46.409 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 416], 99.95th=[ 424], 00:15:46.409 | 99.99th=[ 433] 00:15:46.409 write: IOPS=2489, BW=9958KiB/s (10.2MB/s)(9968KiB/1001msec); 0 zone resets 00:15:46.409 slat (nsec): min=8048, max=39055, avg=10055.45, stdev=1497.93 00:15:46.409 clat (usec): min=135, max=415, avg=164.53, stdev=31.99 00:15:46.409 lat (usec): min=145, max=454, avg=174.58, stdev=32.31 00:15:46.409 clat percentiles (usec): 00:15:46.409 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:15:46.409 | 30.00th=[ 151], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:15:46.409 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 241], 95.00th=[ 243], 00:15:46.409 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 318], 99.95th=[ 383], 00:15:46.409 | 99.99th=[ 416] 00:15:46.409 bw ( KiB/s): min=10264, max=10264, per=100.00%, avg=10264.00, stdev= 0.00, samples=1 00:15:46.409 iops : min= 2566, max= 2566, avg=2566.00, stdev= 0.00, samples=1 00:15:46.409 lat (usec) : 250=62.42%, 500=37.58% 00:15:46.409 cpu : usr=2.20%, sys=4.10%, ctx=4541, majf=0, minf=2 00:15:46.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.409 issued rwts: total=2048,2492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.409 00:15:46.409 Run status group 0 (all jobs): 00:15:46.409 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:15:46.409 WRITE: bw=9958KiB/s (10.2MB/s), 9958KiB/s-9958KiB/s (10.2MB/s-10.2MB/s), io=9968KiB (10.2MB), run=1001-1001msec 00:15:46.409 00:15:46.409 Disk stats (read/write): 00:15:46.409 nvme0n1: ios=1991/2048, merge=0/0, ticks=528/323, in_queue=851, util=91.18% 00:15:46.409 12:50:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:46.668 12:50:17 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.668 rmmod nvme_tcp 00:15:46.668 rmmod nvme_fabrics 00:15:46.668 rmmod nvme_keyring 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1695372 ']' 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1695372 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1695372 ']' 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1695372 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.668 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1695372 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1695372' 00:15:46.927 killing process with pid 1695372 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1695372 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1695372 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.927 12:50:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.465 12:50:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.465 00:15:49.465 real 0m15.302s 00:15:49.465 user 0m35.705s 00:15:49.465 sys 0m5.155s 00:15:49.465 12:50:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.465 12:50:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:49.465 ************************************ 00:15:49.465 END TEST nvmf_nmic 00:15:49.465 ************************************ 00:15:49.465 12:50:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:49.465 12:50:19 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
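Note: the nmic test traced above exercises two behaviors — a bdev can be claimed by only one subsystem, and a host can reach one subsystem over multiple listeners. For replaying it outside the harness, the flow reduces to the RPC sequence below. This is a minimal hedged sketch, not the harness's exact script: it assumes a running nvmf_tgt on the default /var/tmp/spdk.sock and SPDK's scripts/rpc.py on PATH (the harness instead uses the rpc_cmd wrapper and the absolute paths shown in the trace), and it omits the --hostnqn/--hostid flags visible in the nvme connect lines.

  # Reduced reproduction of the nmic flow (commands taken from the trace above)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # test case1: expected to fail, Malloc0 is already claimed by cnode1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected: namespace added' || echo 'failed as expected'
  # test case2: connect to the same subsystem over both listeners
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The fio-wrapper flags in the trace (-i 4096 -d 1 -t write -r 1 -v) appear to map directly onto the job file it printed: bs=4096, iodepth=1, rw=write, runtime=1, with crc32c-intel verification enabled.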
00:15:49.465 12:50:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:49.465 12:50:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.465 12:50:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.465 ************************************ 00:15:49.465 START TEST nvmf_fio_target 00:15:49.465 ************************************ 00:15:49.465 12:50:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:49.465 * Looking for test storage... 00:15:49.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.465 12:50:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.742 12:50:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:54.742 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:54.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.742 12:50:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:54.742 Found net devices under 0000:86:00.0: cvl_0_0 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.742 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:54.742 Found net devices under 0000:86:00.1: cvl_0_1 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.743 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.002 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:15:55.002 00:15:55.002 --- 10.0.0.2 ping statistics --- 00:15:55.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.002 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:15:55.003 00:15:55.003 --- 10.0.0.1 ping statistics --- 00:15:55.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.003 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1700198 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1700198 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1700198 ']' 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.003 12:50:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.003 [2024-07-15 12:50:25.945635] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:55.003 [2024-07-15 12:50:25.945678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.263 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.263 [2024-07-15 12:50:26.016061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.263 [2024-07-15 12:50:26.093936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.263 [2024-07-15 12:50:26.093973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.263 [2024-07-15 12:50:26.093980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.263 [2024-07-15 12:50:26.093987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.263 [2024-07-15 12:50:26.093992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.263 [2024-07-15 12:50:26.094099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.263 [2024-07-15 12:50:26.094207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.263 [2024-07-15 12:50:26.094314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.263 [2024-07-15 12:50:26.094314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.832 12:50:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:56.091 [2024-07-15 12:50:26.946621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.091 12:50:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.350 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:56.350 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.609 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:56.609 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.868 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:15:56.868 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.868 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:56.868 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:57.127 12:50:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:57.386 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:57.386 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:57.645 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:57.645 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:57.645 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:57.645 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:57.904 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:58.163 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:58.163 12:50:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.163 12:50:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:58.163 12:50:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.421 12:50:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.679 [2024-07-15 12:50:29.428987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.679 12:50:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:58.936 12:50:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:58.936 12:50:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.311 12:50:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:00.311 12:50:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:00.311 12:50:31 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.311 12:50:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:00.311 12:50:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:00.311 12:50:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:02.215 12:50:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:02.215 [global] 00:16:02.215 thread=1 00:16:02.215 invalidate=1 00:16:02.215 rw=write 00:16:02.215 time_based=1 00:16:02.215 runtime=1 00:16:02.215 ioengine=libaio 00:16:02.215 direct=1 00:16:02.215 bs=4096 00:16:02.215 iodepth=1 00:16:02.215 norandommap=0 00:16:02.215 numjobs=1 00:16:02.215 00:16:02.215 verify_dump=1 00:16:02.215 verify_backlog=512 00:16:02.215 verify_state_save=0 00:16:02.215 do_verify=1 00:16:02.215 verify=crc32c-intel 00:16:02.215 [job0] 00:16:02.215 filename=/dev/nvme0n1 00:16:02.215 [job1] 00:16:02.215 filename=/dev/nvme0n2 00:16:02.215 [job2] 00:16:02.215 filename=/dev/nvme0n3 00:16:02.215 [job3] 00:16:02.215 filename=/dev/nvme0n4 00:16:02.215 Could not set queue depth (nvme0n1) 00:16:02.215 Could not set queue depth (nvme0n2) 00:16:02.215 Could not set queue depth (nvme0n3) 00:16:02.215 Could not set queue depth (nvme0n4) 00:16:02.472 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.472 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.472 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.472 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.472 fio-3.35 00:16:02.472 Starting 4 threads 00:16:03.885 00:16:03.885 job0: (groupid=0, jobs=1): err= 0: pid=1701544: Mon Jul 15 12:50:34 2024 00:16:03.885 read: IOPS=239, BW=959KiB/s (982kB/s)(988KiB/1030msec) 00:16:03.885 slat (nsec): min=6381, max=24513, avg=8432.10, stdev=4221.81 00:16:03.885 clat (usec): min=220, max=41973, avg=3742.67, stdev=11436.68 00:16:03.885 lat (usec): min=227, max=41995, avg=3751.10, stdev=11439.73 00:16:03.885 clat percentiles (usec): 00:16:03.885 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:16:03.885 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:16:03.885 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[41157], 00:16:03.885 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:03.885 | 99.99th=[42206] 00:16:03.885 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:16:03.885 slat (nsec): min=8920, max=38564, avg=10828.62, stdev=2198.31 00:16:03.885 clat 
(usec): min=144, max=369, avg=187.49, stdev=28.12 00:16:03.885 lat (usec): min=153, max=408, avg=198.32, stdev=28.40 00:16:03.885 clat percentiles (usec): 00:16:03.885 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:16:03.885 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:16:03.885 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 241], 95.00th=[ 243], 00:16:03.885 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 371], 99.95th=[ 371], 00:16:03.885 | 99.99th=[ 371] 00:16:03.885 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.886 lat (usec) : 250=76.42%, 500=20.82% 00:16:03.886 lat (msec) : 50=2.77% 00:16:03.886 cpu : usr=0.29%, sys=0.78%, ctx=759, majf=0, minf=1 00:16:03.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 issued rwts: total=247,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.886 job1: (groupid=0, jobs=1): err= 0: pid=1701546: Mon Jul 15 12:50:34 2024 00:16:03.886 read: IOPS=1308, BW=5235KiB/s (5360kB/s)(5376KiB/1027msec) 00:16:03.886 slat (nsec): min=6094, max=38543, avg=8905.18, stdev=2130.10 00:16:03.886 clat (usec): min=216, max=42025, avg=535.75, stdev=3334.75 00:16:03.886 lat (usec): min=224, max=42035, avg=544.66, stdev=3335.47 00:16:03.886 clat percentiles (usec): 00:16:03.886 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:16:03.886 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:16:03.886 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 318], 00:16:03.886 | 99.00th=[ 383], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:16:03.886 | 99.99th=[42206] 00:16:03.886 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:16:03.886 slat (nsec): min=4856, max=41944, avg=11438.76, stdev=2482.03 00:16:03.886 clat (usec): min=128, max=372, avg=174.69, stdev=28.12 00:16:03.886 lat (usec): min=133, max=409, avg=186.13, stdev=27.41 00:16:03.886 clat percentiles (usec): 00:16:03.886 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:16:03.886 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:16:03.886 | 70.00th=[ 182], 80.00th=[ 196], 90.00th=[ 215], 95.00th=[ 235], 00:16:03.886 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 367], 99.95th=[ 375], 00:16:03.886 | 99.99th=[ 375] 00:16:03.886 bw ( KiB/s): min= 5728, max= 6560, per=51.50%, avg=6144.00, stdev=588.31, samples=2 00:16:03.886 iops : min= 1432, max= 1640, avg=1536.00, stdev=147.08, samples=2 00:16:03.886 lat (usec) : 250=72.29%, 500=27.33%, 750=0.03% 00:16:03.886 lat (msec) : 4=0.03%, 50=0.31% 00:16:03.886 cpu : usr=1.36%, sys=5.26%, ctx=2881, majf=0, minf=1 00:16:03.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 issued rwts: total=1344,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.886 job2: (groupid=0, jobs=1): err= 0: pid=1701548: Mon Jul 15 12:50:34 2024 00:16:03.886 read: IOPS=22, BW=90.0KiB/s 
(92.2kB/s)(92.0KiB/1022msec) 00:16:03.886 slat (nsec): min=10139, max=25990, avg=22017.48, stdev=3713.94 00:16:03.886 clat (usec): min=302, max=41383, avg=39197.82, stdev=8479.98 00:16:03.886 lat (usec): min=325, max=41393, avg=39219.84, stdev=8479.63 00:16:03.886 clat percentiles (usec): 00:16:03.886 | 1.00th=[ 302], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:16:03.886 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:03.886 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:03.886 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:03.886 | 99.99th=[41157] 00:16:03.886 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:16:03.886 slat (nsec): min=9676, max=36914, avg=12475.14, stdev=2081.74 00:16:03.886 clat (usec): min=150, max=307, avg=217.77, stdev=28.77 00:16:03.886 lat (usec): min=163, max=344, avg=230.24, stdev=29.25 00:16:03.886 clat percentiles (usec): 00:16:03.886 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 180], 20.00th=[ 188], 00:16:03.886 | 30.00th=[ 196], 40.00th=[ 215], 50.00th=[ 235], 60.00th=[ 239], 00:16:03.886 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:16:03.886 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 310], 00:16:03.886 | 99.99th=[ 310] 00:16:03.886 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.886 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.886 lat (usec) : 250=92.71%, 500=3.18% 00:16:03.886 lat (msec) : 50=4.11% 00:16:03.886 cpu : usr=0.49%, sys=0.78%, ctx=536, majf=0, minf=1 00:16:03.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.886 job3: (groupid=0, jobs=1): err= 0: pid=1701549: Mon Jul 15 12:50:34 2024 00:16:03.886 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:16:03.886 slat (nsec): min=11096, max=23953, avg=21939.95, stdev=2482.07 00:16:03.886 clat (usec): min=40831, max=41222, avg=40980.50, stdev=75.28 00:16:03.886 lat (usec): min=40853, max=41234, avg=41002.44, stdev=73.63 00:16:03.886 clat percentiles (usec): 00:16:03.886 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:03.886 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:03.886 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:03.886 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:03.886 | 99.99th=[41157] 00:16:03.886 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:16:03.886 slat (nsec): min=10978, max=42959, avg=12377.77, stdev=2086.20 00:16:03.886 clat (usec): min=148, max=782, avg=204.76, stdev=47.42 00:16:03.886 lat (usec): min=160, max=795, avg=217.14, stdev=47.68 00:16:03.886 clat percentiles (usec): 00:16:03.886 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:16:03.886 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:16:03.886 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 251], 00:16:03.886 | 99.00th=[ 310], 99.50th=[ 603], 99.90th=[ 783], 99.95th=[ 783], 00:16:03.886 | 99.99th=[ 783] 00:16:03.886 bw ( KiB/s): min= 4096, max= 
4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.886 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.886 lat (usec) : 250=91.01%, 500=4.12%, 750=0.56%, 1000=0.19% 00:16:03.886 lat (msec) : 50=4.12% 00:16:03.886 cpu : usr=0.30%, sys=1.08%, ctx=535, majf=0, minf=2 00:16:03.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.886 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.886 00:16:03.886 Run status group 0 (all jobs): 00:16:03.886 READ: bw=6353KiB/s (6506kB/s), 86.7KiB/s-5235KiB/s (88.8kB/s-5360kB/s), io=6544KiB (6701kB), run=1015-1030msec 00:16:03.886 WRITE: bw=11.7MiB/s (12.2MB/s), 1988KiB/s-5982KiB/s (2036kB/s-6126kB/s), io=12.0MiB (12.6MB), run=1015-1030msec 00:16:03.886 00:16:03.886 Disk stats (read/write): 00:16:03.886 nvme0n1: ios=291/512, merge=0/0, ticks=696/87, in_queue=783, util=81.85% 00:16:03.886 nvme0n2: ios=1122/1536, merge=0/0, ticks=484/248, in_queue=732, util=82.78% 00:16:03.886 nvme0n3: ios=57/512, merge=0/0, ticks=1134/106, in_queue=1240, util=97.93% 00:16:03.886 nvme0n4: ios=63/512, merge=0/0, ticks=883/102, in_queue=985, util=97.78% 00:16:03.886 12:50:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:03.886 [global] 00:16:03.886 thread=1 00:16:03.886 invalidate=1 00:16:03.886 rw=randwrite 00:16:03.886 time_based=1 00:16:03.886 runtime=1 00:16:03.886 ioengine=libaio 00:16:03.886 direct=1 00:16:03.886 bs=4096 00:16:03.886 iodepth=1 00:16:03.886 norandommap=0 00:16:03.886 numjobs=1 00:16:03.886 00:16:03.886 verify_dump=1 00:16:03.886 verify_backlog=512 00:16:03.886 verify_state_save=0 00:16:03.886 do_verify=1 00:16:03.886 verify=crc32c-intel 00:16:03.886 [job0] 00:16:03.886 filename=/dev/nvme0n1 00:16:03.886 [job1] 00:16:03.886 filename=/dev/nvme0n2 00:16:03.886 [job2] 00:16:03.886 filename=/dev/nvme0n3 00:16:03.886 [job3] 00:16:03.886 filename=/dev/nvme0n4 00:16:03.886 Could not set queue depth (nvme0n1) 00:16:03.886 Could not set queue depth (nvme0n2) 00:16:03.886 Could not set queue depth (nvme0n3) 00:16:03.886 Could not set queue depth (nvme0n4) 00:16:04.144 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.144 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.144 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.144 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.144 fio-3.35 00:16:04.144 Starting 4 threads 00:16:05.521 00:16:05.521 job0: (groupid=0, jobs=1): err= 0: pid=1701927: Mon Jul 15 12:50:36 2024 00:16:05.521 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:16:05.521 slat (nsec): min=10851, max=36033, avg=23254.33, stdev=3991.79 00:16:05.521 clat (usec): min=40896, max=42031, avg=41312.05, stdev=453.19 00:16:05.521 lat (usec): min=40919, max=42055, avg=41335.30, stdev=452.98 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:05.521 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:05.521 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:05.521 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:05.521 | 99.99th=[42206] 00:16:05.521 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:05.521 slat (usec): min=4, max=25710, avg=59.31, stdev=1135.85 00:16:05.521 clat (usec): min=151, max=723, avg=191.47, stdev=37.06 00:16:05.521 lat (usec): min=155, max=26015, avg=250.78, stdev=1141.50 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:16:05.521 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:16:05.521 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 221], 95.00th=[ 241], 00:16:05.521 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 725], 99.95th=[ 725], 00:16:05.521 | 99.99th=[ 725] 00:16:05.521 bw ( KiB/s): min= 4096, max= 4096, per=20.50%, avg=4096.00, stdev= 0.00, samples=1 00:16:05.521 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:05.521 lat (usec) : 250=92.50%, 500=3.38%, 750=0.19% 00:16:05.521 lat (msec) : 50=3.94% 00:16:05.521 cpu : usr=0.30%, sys=0.40%, ctx=535, majf=0, minf=1 00:16:05.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.521 job1: (groupid=0, jobs=1): err= 0: pid=1701928: Mon Jul 15 12:50:36 2024 00:16:05.521 read: IOPS=1892, BW=7568KiB/s (7750kB/s)(7576KiB/1001msec) 00:16:05.521 slat (nsec): min=7231, max=78627, avg=8252.33, stdev=2237.98 00:16:05.521 clat (usec): min=234, max=561, avg=304.75, stdev=70.41 00:16:05.521 lat (usec): min=242, max=570, avg=313.00, stdev=70.53 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:16:05.521 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:16:05.521 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 441], 95.00th=[ 469], 00:16:05.521 | 99.00th=[ 510], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 562], 00:16:05.521 | 99.99th=[ 562] 00:16:05.521 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:05.521 slat (nsec): min=3375, max=82757, avg=11176.79, stdev=3134.63 00:16:05.521 clat (usec): min=137, max=820, avg=180.41, stdev=34.81 00:16:05.521 lat (usec): min=147, max=823, avg=191.59, stdev=34.61 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:16:05.521 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:16:05.521 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 225], 00:16:05.521 | 99.00th=[ 277], 99.50th=[ 318], 99.90th=[ 562], 99.95th=[ 627], 00:16:05.521 | 99.99th=[ 824] 00:16:05.521 bw ( KiB/s): min= 8192, max= 8192, per=41.00%, avg=8192.00, stdev= 0.00, samples=1 00:16:05.521 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:05.521 lat (usec) : 250=53.70%, 500=45.48%, 750=0.79%, 1000=0.03% 00:16:05.521 cpu : usr=3.60%, sys=5.60%, ctx=3943, majf=0, minf=2 00:16:05.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:05.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 issued rwts: total=1894,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.521 job2: (groupid=0, jobs=1): err= 0: pid=1701929: Mon Jul 15 12:50:36 2024 00:16:05.521 read: IOPS=514, BW=2057KiB/s (2106kB/s)(2108KiB/1025msec) 00:16:05.521 slat (nsec): min=7232, max=28974, avg=8836.44, stdev=2654.90 00:16:05.521 clat (usec): min=228, max=41077, avg=1517.67, stdev=6760.79 00:16:05.521 lat (usec): min=236, max=41088, avg=1526.51, stdev=6763.17 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[ 243], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 277], 00:16:05.521 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 433], 00:16:05.521 | 70.00th=[ 449], 80.00th=[ 465], 90.00th=[ 486], 95.00th=[ 502], 00:16:05.521 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:05.521 | 99.99th=[41157] 00:16:05.521 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:16:05.521 slat (nsec): min=10771, max=40263, avg=12148.85, stdev=1883.05 00:16:05.521 clat (usec): min=145, max=649, avg=195.08, stdev=35.09 00:16:05.521 lat (usec): min=158, max=661, avg=207.23, stdev=35.53 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:16:05.521 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:16:05.521 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 241], 00:16:05.521 | 99.00th=[ 326], 99.50th=[ 408], 99.90th=[ 562], 99.95th=[ 652], 00:16:05.521 | 99.99th=[ 652] 00:16:05.521 bw ( KiB/s): min= 8192, max= 8192, per=41.00%, avg=8192.00, stdev= 0.00, samples=1 00:16:05.521 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:05.521 lat (usec) : 250=64.28%, 500=33.53%, 750=1.23% 00:16:05.521 lat (msec) : 50=0.97% 00:16:05.521 cpu : usr=1.56%, sys=2.25%, ctx=1552, majf=0, minf=1 00:16:05.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.521 job3: (groupid=0, jobs=1): err= 0: pid=1701930: Mon Jul 15 12:50:36 2024 00:16:05.521 read: IOPS=1243, BW=4973KiB/s (5092kB/s)(5092KiB/1024msec) 00:16:05.521 slat (nsec): min=6673, max=25534, avg=7587.86, stdev=1828.77 00:16:05.521 clat (usec): min=220, max=42063, avg=554.30, stdev=3473.29 00:16:05.521 lat (usec): min=227, max=42086, avg=561.89, stdev=3474.50 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 251], 00:16:05.521 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:16:05.521 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:16:05.521 | 99.00th=[ 314], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:05.521 | 99.99th=[42206] 00:16:05.521 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:16:05.521 slat (nsec): min=8422, max=36185, avg=10486.08, stdev=1299.59 00:16:05.521 clat (usec): min=143, max=800, avg=184.18, stdev=26.73 00:16:05.521 lat (usec): min=153, max=811, avg=194.67, stdev=27.03 00:16:05.521 clat percentiles (usec): 00:16:05.521 | 
1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:16:05.521 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:16:05.521 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:16:05.521 | 99.00th=[ 237], 99.50th=[ 326], 99.90th=[ 553], 99.95th=[ 799], 00:16:05.521 | 99.99th=[ 799] 00:16:05.521 bw ( KiB/s): min= 3568, max= 8720, per=30.75%, avg=6144.00, stdev=3643.01, samples=2 00:16:05.521 iops : min= 892, max= 2180, avg=1536.00, stdev=910.75, samples=2 00:16:05.521 lat (usec) : 250=63.51%, 500=36.10%, 750=0.04%, 1000=0.04% 00:16:05.521 lat (msec) : 50=0.32% 00:16:05.521 cpu : usr=1.66%, sys=2.25%, ctx=2810, majf=0, minf=1 00:16:05.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.521 issued rwts: total=1273,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.521 00:16:05.521 Run status group 0 (all jobs): 00:16:05.521 READ: bw=14.2MiB/s (14.8MB/s), 83.9KiB/s-7568KiB/s (85.9kB/s-7750kB/s), io=14.5MiB (15.2MB), run=1001-1025msec 00:16:05.521 WRITE: bw=19.5MiB/s (20.5MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1025msec 00:16:05.521 00:16:05.522 Disk stats (read/write): 00:16:05.522 nvme0n1: ios=69/512, merge=0/0, ticks=957/99, in_queue=1056, util=98.20% 00:16:05.522 nvme0n2: ios=1573/1793, merge=0/0, ticks=693/319, in_queue=1012, util=97.77% 00:16:05.522 nvme0n3: ios=565/1024, merge=0/0, ticks=909/189, in_queue=1098, util=97.20% 00:16:05.522 nvme0n4: ios=1312/1536, merge=0/0, ticks=1417/280, in_queue=1697, util=96.24% 00:16:05.522 12:50:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:05.522 [global] 00:16:05.522 thread=1 00:16:05.522 invalidate=1 00:16:05.522 rw=write 00:16:05.522 time_based=1 00:16:05.522 runtime=1 00:16:05.522 ioengine=libaio 00:16:05.522 direct=1 00:16:05.522 bs=4096 00:16:05.522 iodepth=128 00:16:05.522 norandommap=0 00:16:05.522 numjobs=1 00:16:05.522 00:16:05.522 verify_dump=1 00:16:05.522 verify_backlog=512 00:16:05.522 verify_state_save=0 00:16:05.522 do_verify=1 00:16:05.522 verify=crc32c-intel 00:16:05.522 [job0] 00:16:05.522 filename=/dev/nvme0n1 00:16:05.522 [job1] 00:16:05.522 filename=/dev/nvme0n2 00:16:05.522 [job2] 00:16:05.522 filename=/dev/nvme0n3 00:16:05.522 [job3] 00:16:05.522 filename=/dev/nvme0n4 00:16:05.522 Could not set queue depth (nvme0n1) 00:16:05.522 Could not set queue depth (nvme0n2) 00:16:05.522 Could not set queue depth (nvme0n3) 00:16:05.522 Could not set queue depth (nvme0n4) 00:16:05.781 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.781 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.781 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.781 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.781 fio-3.35 00:16:05.781 Starting 4 threads 00:16:07.191 00:16:07.191 job0: (groupid=0, jobs=1): err= 0: pid=1702296: Mon Jul 15 12:50:37 2024 00:16:07.191 read: IOPS=4156, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1010msec) 
00:16:07.191 slat (nsec): min=1383, max=9920.0k, avg=130840.75, stdev=822586.83 00:16:07.191 clat (usec): min=3596, max=63408, avg=14285.15, stdev=11431.98 00:16:07.191 lat (usec): min=3603, max=63418, avg=14415.99, stdev=11510.37 00:16:07.191 clat percentiles (usec): 00:16:07.191 | 1.00th=[ 5604], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:16:07.191 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:16:07.191 | 70.00th=[11338], 80.00th=[13304], 90.00th=[18220], 95.00th=[47973], 00:16:07.191 | 99.00th=[61080], 99.50th=[62129], 99.90th=[63177], 99.95th=[63177], 00:16:07.191 | 99.99th=[63177] 00:16:07.191 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:16:07.191 slat (usec): min=2, max=9404, avg=90.42, stdev=378.80 00:16:07.191 clat (usec): min=1679, max=63411, avg=14671.73, stdev=9262.81 00:16:07.191 lat (usec): min=1696, max=63424, avg=14762.15, stdev=9296.85 00:16:07.191 clat percentiles (usec): 00:16:07.191 | 1.00th=[ 3392], 5.00th=[ 5604], 10.00th=[ 6849], 20.00th=[ 8979], 00:16:07.191 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[12649], 00:16:07.192 | 70.00th=[19268], 80.00th=[20317], 90.00th=[20579], 95.00th=[33424], 00:16:07.192 | 99.00th=[54264], 99.50th=[55313], 99.90th=[63177], 99.95th=[63177], 00:16:07.192 | 99.99th=[63177] 00:16:07.192 bw ( KiB/s): min=12544, max=24112, per=25.72%, avg=18328.00, stdev=8179.81, samples=2 00:16:07.192 iops : min= 3136, max= 6028, avg=4582.00, stdev=2044.95, samples=2 00:16:07.192 lat (msec) : 2=0.05%, 4=1.09%, 10=32.09%, 20=49.57%, 50=13.89% 00:16:07.192 lat (msec) : 100=3.32% 00:16:07.192 cpu : usr=4.66%, sys=4.16%, ctx=566, majf=0, minf=1 00:16:07.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:07.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.192 issued rwts: total=4198,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.192 job1: (groupid=0, jobs=1): err= 0: pid=1702297: Mon Jul 15 12:50:37 2024 00:16:07.192 read: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1009msec) 00:16:07.192 slat (nsec): min=1504, max=17125k, avg=139212.26, stdev=981727.36 00:16:07.192 clat (usec): min=6012, max=46515, avg=16679.65, stdev=6892.48 00:16:07.192 lat (usec): min=6018, max=46520, avg=16818.86, stdev=6973.52 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 6783], 5.00th=[10552], 10.00th=[10683], 20.00th=[10945], 00:16:07.192 | 30.00th=[11076], 40.00th=[11863], 50.00th=[14091], 60.00th=[18220], 00:16:07.192 | 70.00th=[21365], 80.00th=[22152], 90.00th=[23987], 95.00th=[28443], 00:16:07.192 | 99.00th=[41157], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:16:07.192 | 99.99th=[46400] 00:16:07.192 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:16:07.192 slat (usec): min=2, max=26127, avg=181.38, stdev=1016.20 00:16:07.192 clat (usec): min=2715, max=68320, avg=25141.88, stdev=11296.72 00:16:07.192 lat (usec): min=2726, max=68346, avg=25323.26, stdev=11343.25 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 4359], 5.00th=[14484], 10.00th=[19006], 20.00th=[20055], 00:16:07.192 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20579], 60.00th=[20841], 00:16:07.192 | 70.00th=[22152], 80.00th=[28181], 90.00th=[45876], 95.00th=[50594], 00:16:07.192 | 99.00th=[64750], 99.50th=[66847], 99.90th=[67634], 
99.95th=[68682], 00:16:07.192 | 99.99th=[68682] 00:16:07.192 bw ( KiB/s): min=12288, max=12288, per=17.25%, avg=12288.00, stdev= 0.00, samples=2 00:16:07.192 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:07.192 lat (msec) : 4=0.39%, 10=2.98%, 20=41.19%, 50=52.55%, 100=2.88% 00:16:07.192 cpu : usr=2.88%, sys=3.57%, ctx=399, majf=0, minf=1 00:16:07.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:07.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.192 issued rwts: total=3029,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.192 job2: (groupid=0, jobs=1): err= 0: pid=1702298: Mon Jul 15 12:50:37 2024 00:16:07.192 read: IOPS=5540, BW=21.6MiB/s (22.7MB/s)(21.8MiB/1007msec) 00:16:07.192 slat (nsec): min=1258, max=10585k, avg=99014.77, stdev=706528.97 00:16:07.192 clat (usec): min=3068, max=28099, avg=12130.37, stdev=2788.32 00:16:07.192 lat (usec): min=3858, max=28110, avg=12229.38, stdev=2844.02 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 4817], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[10683], 00:16:07.192 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:16:07.192 | 70.00th=[11994], 80.00th=[13304], 90.00th=[16581], 95.00th=[18482], 00:16:07.192 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:16:07.192 | 99.99th=[28181] 00:16:07.192 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:16:07.192 slat (usec): min=2, max=9231, avg=74.83, stdev=367.82 00:16:07.192 clat (usec): min=2380, max=21623, avg=10550.07, stdev=2160.02 00:16:07.192 lat (usec): min=2391, max=21627, avg=10624.90, stdev=2193.41 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 3261], 5.00th=[ 5276], 10.00th=[ 7242], 20.00th=[ 9503], 00:16:07.192 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:16:07.192 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[11994], 00:16:07.192 | 99.00th=[12256], 99.50th=[15795], 99.90th=[21103], 99.95th=[21365], 00:16:07.192 | 99.99th=[21627] 00:16:07.192 bw ( KiB/s): min=21392, max=23664, per=31.62%, avg=22528.00, stdev=1606.55, samples=2 00:16:07.192 iops : min= 5348, max= 5916, avg=5632.00, stdev=401.64, samples=2 00:16:07.192 lat (msec) : 4=1.14%, 10=14.63%, 20=83.23%, 50=1.00% 00:16:07.192 cpu : usr=3.88%, sys=5.27%, ctx=663, majf=0, minf=1 00:16:07.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:07.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.192 issued rwts: total=5579,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.192 job3: (groupid=0, jobs=1): err= 0: pid=1702303: Mon Jul 15 12:50:37 2024 00:16:07.192 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:16:07.192 slat (nsec): min=1145, max=17139k, avg=110372.21, stdev=788516.04 00:16:07.192 clat (usec): min=6958, max=37973, avg=14349.55, stdev=5532.64 00:16:07.192 lat (usec): min=6962, max=37995, avg=14459.92, stdev=5593.53 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11338], 00:16:07.192 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 
60.00th=[12518], 00:16:07.192 | 70.00th=[13698], 80.00th=[20841], 90.00th=[21627], 95.00th=[27132], 00:16:07.192 | 99.00th=[31851], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:16:07.192 | 99.99th=[38011] 00:16:07.192 write: IOPS=4655, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1005msec); 0 zone resets 00:16:07.192 slat (usec): min=2, max=16393, avg=98.51, stdev=680.11 00:16:07.192 clat (usec): min=1087, max=37078, avg=13126.81, stdev=4045.30 00:16:07.192 lat (usec): min=1098, max=37090, avg=13225.32, stdev=4119.17 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[11076], 00:16:07.192 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:16:07.192 | 70.00th=[11863], 80.00th=[15533], 90.00th=[20841], 95.00th=[21890], 00:16:07.192 | 99.00th=[22152], 99.50th=[22938], 99.90th=[34341], 99.95th=[36963], 00:16:07.192 | 99.99th=[36963] 00:16:07.192 bw ( KiB/s): min=16384, max=20480, per=25.87%, avg=18432.00, stdev=2896.31, samples=2 00:16:07.192 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:16:07.192 lat (msec) : 2=0.03%, 10=8.95%, 20=72.74%, 50=18.28% 00:16:07.192 cpu : usr=4.78%, sys=5.28%, ctx=338, majf=0, minf=1 00:16:07.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:07.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.192 issued rwts: total=4608,4679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.192 00:16:07.192 Run status group 0 (all jobs): 00:16:07.192 READ: bw=67.3MiB/s (70.6MB/s), 11.7MiB/s-21.6MiB/s (12.3MB/s-22.7MB/s), io=68.0MiB (71.3MB), run=1005-1010msec 00:16:07.192 WRITE: bw=69.6MiB/s (73.0MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.9MB/s), io=70.3MiB (73.7MB), run=1005-1010msec 00:16:07.192 00:16:07.192 Disk stats (read/write): 00:16:07.192 nvme0n1: ios=3618/4023, merge=0/0, ticks=45263/52382, in_queue=97645, util=97.39% 00:16:07.192 nvme0n2: ios=2077/2559, merge=0/0, ticks=36645/62829, in_queue=99474, util=96.82% 00:16:07.192 nvme0n3: ios=4326/4608, merge=0/0, ticks=47478/42783, in_queue=90261, util=99.78% 00:16:07.192 nvme0n4: ios=3515/3584, merge=0/0, ticks=36595/32902, in_queue=69497, util=89.20% 00:16:07.192 12:50:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:07.192 [global] 00:16:07.192 thread=1 00:16:07.192 invalidate=1 00:16:07.192 rw=randwrite 00:16:07.192 time_based=1 00:16:07.192 runtime=1 00:16:07.192 ioengine=libaio 00:16:07.192 direct=1 00:16:07.192 bs=4096 00:16:07.192 iodepth=128 00:16:07.192 norandommap=0 00:16:07.192 numjobs=1 00:16:07.192 00:16:07.192 verify_dump=1 00:16:07.192 verify_backlog=512 00:16:07.192 verify_state_save=0 00:16:07.192 do_verify=1 00:16:07.192 verify=crc32c-intel 00:16:07.192 [job0] 00:16:07.192 filename=/dev/nvme0n1 00:16:07.192 [job1] 00:16:07.192 filename=/dev/nvme0n2 00:16:07.192 [job2] 00:16:07.192 filename=/dev/nvme0n3 00:16:07.192 [job3] 00:16:07.192 filename=/dev/nvme0n4 00:16:07.192 Could not set queue depth (nvme0n1) 00:16:07.192 Could not set queue depth (nvme0n2) 00:16:07.192 Could not set queue depth (nvme0n3) 00:16:07.192 Could not set queue depth (nvme0n4) 00:16:07.457 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:16:07.457 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.457 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.457 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.457 fio-3.35 00:16:07.457 Starting 4 threads 00:16:08.837 00:16:08.837 job0: (groupid=0, jobs=1): err= 0: pid=1702710: Mon Jul 15 12:50:39 2024 00:16:08.837 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:16:08.837 slat (nsec): min=1362, max=10429k, avg=117149.17, stdev=807069.06 00:16:08.838 clat (usec): min=3796, max=54049, avg=12567.25, stdev=5657.58 00:16:08.838 lat (usec): min=4428, max=54055, avg=12684.40, stdev=5765.51 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 6390], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9765], 00:16:08.838 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11076], 60.00th=[11469], 00:16:08.838 | 70.00th=[11600], 80.00th=[12780], 90.00th=[17171], 95.00th=[22938], 00:16:08.838 | 99.00th=[42730], 99.50th=[45351], 99.90th=[54264], 99.95th=[54264], 00:16:08.838 | 99.99th=[54264] 00:16:08.838 write: IOPS=3938, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1009msec); 0 zone resets 00:16:08.838 slat (usec): min=2, max=19844, avg=135.29, stdev=798.10 00:16:08.838 clat (usec): min=849, max=101641, avg=20831.76, stdev=18260.02 00:16:08.838 lat (usec): min=862, max=101653, avg=20967.05, stdev=18345.43 00:16:08.838 clat percentiles (msec): 00:16:08.838 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:16:08.838 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 20], 00:16:08.838 | 70.00th=[ 22], 80.00th=[ 29], 90.00th=[ 50], 95.00th=[ 56], 00:16:08.838 | 99.00th=[ 93], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102], 00:16:08.838 | 99.99th=[ 102] 00:16:08.838 bw ( KiB/s): min=14392, max=16384, per=21.33%, avg=15388.00, stdev=1408.56, samples=2 00:16:08.838 iops : min= 3598, max= 4096, avg=3847.00, stdev=352.14, samples=2 00:16:08.838 lat (usec) : 1000=0.04% 00:16:08.838 lat (msec) : 2=0.11%, 4=0.83%, 10=24.77%, 20=52.94%, 50=15.98% 00:16:08.838 lat (msec) : 100=5.13%, 250=0.20% 00:16:08.838 cpu : usr=3.37%, sys=4.56%, ctx=451, majf=0, minf=1 00:16:08.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:08.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.838 issued rwts: total=3584,3974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.838 job1: (groupid=0, jobs=1): err= 0: pid=1702725: Mon Jul 15 12:50:39 2024 00:16:08.838 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:16:08.838 slat (nsec): min=1189, max=38006k, avg=100884.39, stdev=915180.95 00:16:08.838 clat (usec): min=3657, max=66151, avg=13001.39, stdev=8154.09 00:16:08.838 lat (usec): min=3664, max=66177, avg=13102.27, stdev=8211.65 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 3884], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9765], 00:16:08.838 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:16:08.838 | 70.00th=[11863], 80.00th=[13435], 90.00th=[17695], 95.00th=[22414], 00:16:08.838 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:16:08.838 | 99.99th=[66323] 00:16:08.838 write: IOPS=4456, BW=17.4MiB/s 
(18.3MB/s)(17.5MiB/1004msec); 0 zone resets 00:16:08.838 slat (usec): min=2, max=9277, avg=111.27, stdev=714.90 00:16:08.838 clat (usec): min=397, max=92118, avg=16563.01, stdev=16676.34 00:16:08.838 lat (usec): min=412, max=92130, avg=16674.28, stdev=16782.78 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 1303], 5.00th=[ 4080], 10.00th=[ 6063], 20.00th=[ 7832], 00:16:08.838 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11076], 00:16:08.838 | 70.00th=[15533], 80.00th=[19792], 90.00th=[40633], 95.00th=[54789], 00:16:08.838 | 99.00th=[86508], 99.50th=[87557], 99.90th=[91751], 99.95th=[91751], 00:16:08.838 | 99.99th=[91751] 00:16:08.838 bw ( KiB/s): min=13736, max=21040, per=24.10%, avg=17388.00, stdev=5164.71, samples=2 00:16:08.838 iops : min= 3434, max= 5260, avg=4347.00, stdev=1291.18, samples=2 00:16:08.838 lat (usec) : 500=0.01%, 1000=0.49% 00:16:08.838 lat (msec) : 2=0.32%, 4=2.22%, 10=34.24%, 20=50.99%, 50=7.13% 00:16:08.838 lat (msec) : 100=4.61% 00:16:08.838 cpu : usr=2.99%, sys=5.68%, ctx=432, majf=0, minf=1 00:16:08.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:08.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.838 issued rwts: total=4096,4474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.838 job2: (groupid=0, jobs=1): err= 0: pid=1702744: Mon Jul 15 12:50:39 2024 00:16:08.838 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:16:08.838 slat (nsec): min=1329, max=11192k, avg=105494.77, stdev=620894.47 00:16:08.838 clat (usec): min=3899, max=66442, avg=12692.04, stdev=5363.45 00:16:08.838 lat (usec): min=3902, max=66446, avg=12797.53, stdev=5424.64 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:16:08.838 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:16:08.838 | 70.00th=[12387], 80.00th=[13173], 90.00th=[15664], 95.00th=[16057], 00:16:08.838 | 99.00th=[42206], 99.50th=[55313], 99.90th=[66323], 99.95th=[66323], 00:16:08.838 | 99.99th=[66323] 00:16:08.838 write: IOPS=4617, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:16:08.838 slat (usec): min=2, max=24560, avg=105.09, stdev=664.87 00:16:08.838 clat (usec): min=1554, max=78376, avg=14784.78, stdev=10184.31 00:16:08.838 lat (usec): min=3046, max=78386, avg=14889.88, stdev=10223.26 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 4948], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11076], 00:16:08.838 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:16:08.838 | 70.00th=[12125], 80.00th=[12911], 90.00th=[21627], 95.00th=[44303], 00:16:08.838 | 99.00th=[56361], 99.50th=[66323], 99.90th=[77071], 99.95th=[78119], 00:16:08.838 | 99.99th=[78119] 00:16:08.838 bw ( KiB/s): min=16384, max=20480, per=25.55%, avg=18432.00, stdev=2896.31, samples=2 00:16:08.838 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:16:08.838 lat (msec) : 2=0.01%, 4=0.55%, 10=9.44%, 20=81.16%, 50=7.30% 00:16:08.838 lat (msec) : 100=1.55% 00:16:08.838 cpu : usr=3.59%, sys=5.39%, ctx=482, majf=0, minf=1 00:16:08.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:08.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:16:08.838 issued rwts: total=4608,4631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.838 job3: (groupid=0, jobs=1): err= 0: pid=1702749: Mon Jul 15 12:50:39 2024 00:16:08.838 read: IOPS=5033, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1007msec) 00:16:08.838 slat (nsec): min=1391, max=9763.5k, avg=98088.05, stdev=568938.87 00:16:08.838 clat (usec): min=5666, max=44330, avg=12225.49, stdev=2849.96 00:16:08.838 lat (usec): min=5669, max=44333, avg=12323.57, stdev=2884.67 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 6587], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10945], 00:16:08.838 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:16:08.838 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14484], 95.00th=[16319], 00:16:08.838 | 99.00th=[24249], 99.50th=[25822], 99.90th=[31065], 99.95th=[44303], 00:16:08.838 | 99.99th=[44303] 00:16:08.838 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:08.838 slat (usec): min=2, max=10074, avg=88.48, stdev=472.73 00:16:08.838 clat (usec): min=587, max=54977, avg=12840.56, stdev=5820.45 00:16:08.838 lat (usec): min=721, max=54981, avg=12929.05, stdev=5851.78 00:16:08.838 clat percentiles (usec): 00:16:08.838 | 1.00th=[ 6063], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[11076], 00:16:08.838 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:16:08.838 | 70.00th=[11731], 80.00th=[12256], 90.00th=[14484], 95.00th=[25822], 00:16:08.838 | 99.00th=[40109], 99.50th=[43779], 99.90th=[54789], 99.95th=[54789], 00:16:08.838 | 99.99th=[54789] 00:16:08.838 bw ( KiB/s): min=20480, max=20480, per=28.39%, avg=20480.00, stdev= 0.00, samples=2 00:16:08.838 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:08.838 lat (usec) : 750=0.01% 00:16:08.838 lat (msec) : 2=0.09%, 10=12.30%, 20=83.54%, 50=3.95%, 100=0.12% 00:16:08.838 cpu : usr=3.08%, sys=5.57%, ctx=631, majf=0, minf=1 00:16:08.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:08.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.838 issued rwts: total=5069,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.838 00:16:08.838 Run status group 0 (all jobs): 00:16:08.838 READ: bw=67.2MiB/s (70.5MB/s), 13.9MiB/s-19.7MiB/s (14.5MB/s-20.6MB/s), io=67.8MiB (71.1MB), run=1003-1009msec 00:16:08.838 WRITE: bw=70.5MiB/s (73.9MB/s), 15.4MiB/s-19.9MiB/s (16.1MB/s-20.8MB/s), io=71.1MiB (74.5MB), run=1003-1009msec 00:16:08.838 00:16:08.838 Disk stats (read/write): 00:16:08.838 nvme0n1: ios=3090/3375, merge=0/0, ticks=37981/62345, in_queue=100326, util=85.77% 00:16:08.838 nvme0n2: ios=3617/3584, merge=0/0, ticks=41438/56153, in_queue=97591, util=90.56% 00:16:08.838 nvme0n3: ios=3607/4039, merge=0/0, ticks=27158/37227, in_queue=64385, util=92.94% 00:16:08.838 nvme0n4: ios=4405/4608, merge=0/0, ticks=23159/25041, in_queue=48200, util=93.82% 00:16:08.838 12:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:08.838 12:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1702905 00:16:08.838 12:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:08.838 12:50:39 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3 00:16:08.838 [global] 00:16:08.838 thread=1 00:16:08.838 invalidate=1 00:16:08.838 rw=read 00:16:08.838 time_based=1 00:16:08.838 runtime=10 00:16:08.838 ioengine=libaio 00:16:08.838 direct=1 00:16:08.838 bs=4096 00:16:08.838 iodepth=1 00:16:08.838 norandommap=1 00:16:08.838 numjobs=1 00:16:08.838 00:16:08.838 [job0] 00:16:08.838 filename=/dev/nvme0n1 00:16:08.838 [job1] 00:16:08.838 filename=/dev/nvme0n2 00:16:08.838 [job2] 00:16:08.838 filename=/dev/nvme0n3 00:16:08.838 [job3] 00:16:08.838 filename=/dev/nvme0n4 00:16:08.838 Could not set queue depth (nvme0n1) 00:16:08.839 Could not set queue depth (nvme0n2) 00:16:08.839 Could not set queue depth (nvme0n3) 00:16:08.839 Could not set queue depth (nvme0n4) 00:16:08.839 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.839 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.839 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.839 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.839 fio-3.35 00:16:08.839 Starting 4 threads 00:16:12.125 12:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:12.125 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=37220352, buflen=4096 00:16:12.125 fio: pid=1703195, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:12.125 12:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:12.125 12:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.125 12:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:12.125 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=299008, buflen=4096 00:16:12.125 fio: pid=1703189, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:12.125 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=46702592, buflen=4096 00:16:12.125 fio: pid=1703159, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:12.125 12:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.125 12:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:12.384 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=327680, buflen=4096 00:16:12.384 fio: pid=1703172, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:12.384 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.384 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:12.384 00:16:12.384 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1703159: Mon Jul 15 12:50:43 2024 00:16:12.384 read: IOPS=3696, BW=14.4MiB/s (15.1MB/s)(44.5MiB/3085msec) 00:16:12.384 
slat (usec): min=2, max=16405, avg= 9.68, stdev=173.88 00:16:12.384 clat (usec): min=181, max=41888, avg=257.17, stdev=665.00 00:16:12.384 lat (usec): min=184, max=41898, avg=266.85, stdev=687.56 00:16:12.384 clat percentiles (usec): 00:16:12.384 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 235], 00:16:12.384 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:16:12.384 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:16:12.384 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 519], 99.95th=[ 725], 00:16:12.384 | 99.99th=[41157] 00:16:12.384 bw ( KiB/s): min=11312, max=15576, per=58.20%, avg=14625.60, stdev=1858.30, samples=5 00:16:12.384 iops : min= 2828, max= 3894, avg=3656.40, stdev=464.58, samples=5 00:16:12.384 lat (usec) : 250=59.55%, 500=40.33%, 750=0.07%, 1000=0.01% 00:16:12.384 lat (msec) : 2=0.01%, 50=0.03% 00:16:12.384 cpu : usr=1.98%, sys=5.45%, ctx=11406, majf=0, minf=1 00:16:12.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.384 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.384 issued rwts: total=11403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.384 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1703172: Mon Jul 15 12:50:43 2024 00:16:12.384 read: IOPS=24, BW=97.4KiB/s (99.7kB/s)(320KiB/3286msec) 00:16:12.384 slat (usec): min=7, max=6759, avg=100.96, stdev=749.10 00:16:12.384 clat (usec): min=402, max=42047, avg=40699.37, stdev=4581.38 00:16:12.384 lat (usec): min=431, max=47946, avg=40801.31, stdev=4650.67 00:16:12.385 clat percentiles (usec): 00:16:12.385 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:12.385 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:12.385 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:12.385 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:12.385 | 99.99th=[42206] 00:16:12.385 bw ( KiB/s): min= 96, max= 104, per=0.39%, avg=97.83, stdev= 3.25, samples=6 00:16:12.385 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:16:12.385 lat (usec) : 500=1.23% 00:16:12.385 lat (msec) : 50=97.53% 00:16:12.385 cpu : usr=0.00%, sys=0.06%, ctx=85, majf=0, minf=1 00:16:12.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.385 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.385 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.385 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1703189: Mon Jul 15 12:50:43 2024 00:16:12.385 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2934msec) 00:16:12.385 slat (usec): min=7, max=10825, avg=166.58, stdev=1256.10 00:16:12.385 clat (usec): min=331, max=42074, avg=39667.00, stdev=8198.88 00:16:12.385 lat (usec): min=343, max=52042, avg=39835.56, stdev=8323.53 00:16:12.385 clat percentiles (usec): 00:16:12.385 | 1.00th=[ 330], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:12.385 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:12.385 | 70.00th=[41681], 80.00th=[42206], 
90.00th=[42206], 95.00th=[42206], 00:16:12.385 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:12.385 | 99.99th=[42206] 00:16:12.385 bw ( KiB/s): min= 96, max= 112, per=0.40%, avg=100.80, stdev= 7.16, samples=5 00:16:12.385 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:16:12.385 lat (usec) : 500=2.70%, 750=1.35% 00:16:12.385 lat (msec) : 50=94.59% 00:16:12.385 cpu : usr=0.00%, sys=0.07%, ctx=76, majf=0, minf=1 00:16:12.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.385 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.385 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.385 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1703195: Mon Jul 15 12:50:43 2024 00:16:12.385 read: IOPS=3362, BW=13.1MiB/s (13.8MB/s)(35.5MiB/2703msec) 00:16:12.385 slat (nsec): min=6387, max=41371, avg=7462.55, stdev=1026.33 00:16:12.385 clat (usec): min=214, max=870, avg=286.42, stdev=19.52 00:16:12.385 lat (usec): min=221, max=880, avg=293.88, stdev=19.58 00:16:12.385 clat percentiles (usec): 00:16:12.385 | 1.00th=[ 243], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:16:12.385 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:16:12.385 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:16:12.385 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 449], 99.95th=[ 498], 00:16:12.385 | 99.99th=[ 873] 00:16:12.385 bw ( KiB/s): min=13488, max=13736, per=54.02%, avg=13574.40, stdev=112.20, samples=5 00:16:12.385 iops : min= 3372, max= 3434, avg=3393.60, stdev=28.05, samples=5 00:16:12.385 lat (usec) : 250=1.85%, 500=98.10%, 750=0.02%, 1000=0.02% 00:16:12.385 cpu : usr=0.93%, sys=3.22%, ctx=9092, majf=0, minf=2 00:16:12.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.385 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.385 issued rwts: total=9088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.385 00:16:12.385 Run status group 0 (all jobs): 00:16:12.385 READ: bw=24.5MiB/s (25.7MB/s), 97.4KiB/s-14.4MiB/s (99.7kB/s-15.1MB/s), io=80.6MiB (84.5MB), run=2703-3286msec 00:16:12.385 00:16:12.385 Disk stats (read/write): 00:16:12.385 nvme0n1: ios=10432/0, merge=0/0, ticks=2622/0, in_queue=2622, util=94.52% 00:16:12.385 nvme0n2: ios=115/0, merge=0/0, ticks=4102/0, in_queue=4102, util=99.50% 00:16:12.385 nvme0n3: ios=71/0, merge=0/0, ticks=2815/0, in_queue=2815, util=96.18% 00:16:12.385 nvme0n4: ios=8870/0, merge=0/0, ticks=3416/0, in_queue=3416, util=99.04% 00:16:12.645 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.645 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:12.645 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.645 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:16:12.904 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.904 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:13.163 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.163 12:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1702905 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:13.422 nvmf hotplug test: fio failed as expected 00:16:13.422 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.682 rmmod nvme_tcp 00:16:13.682 rmmod nvme_fabrics 00:16:13.682 rmmod nvme_keyring 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.682 12:50:44 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1700198 ']' 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1700198 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1700198 ']' 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1700198 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1700198 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1700198' 00:16:13.682 killing process with pid 1700198 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1700198 00:16:13.682 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1700198 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.942 12:50:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.479 12:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:16.479 00:16:16.479 real 0m26.812s 00:16:16.479 user 1m46.776s 00:16:16.479 sys 0m8.236s 00:16:16.479 12:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.479 12:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.479 ************************************ 00:16:16.479 END TEST nvmf_fio_target 00:16:16.479 ************************************ 00:16:16.479 12:50:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:16.479 12:50:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.479 12:50:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:16.479 12:50:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.479 12:50:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.479 ************************************ 00:16:16.479 START TEST nvmf_bdevio 00:16:16.479 ************************************ 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.479 * Looking for test storage... 00:16:16.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.479 12:50:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.479 12:50:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.480 12:50:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.854 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:21.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:21.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:21.855 Found net devices under 0000:86:00.0: cvl_0_0 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:21.855 
Found net devices under 0000:86:00.1: cvl_0_1 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:16:21.855 00:16:21.855 --- 10.0.0.2 ping statistics --- 00:16:21.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.855 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:16:21.855 00:16:21.855 --- 10.0.0.1 ping statistics --- 00:16:21.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.855 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1707506 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1707506 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1707506 ']' 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.855 12:50:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:22.115 [2024-07-15 12:50:52.828548] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:22.115 [2024-07-15 12:50:52.828591] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.115 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.115 [2024-07-15 12:50:52.896936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.115 [2024-07-15 12:50:52.976401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.115 [2024-07-15 12:50:52.976435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:22.115 [2024-07-15 12:50:52.976442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.115 [2024-07-15 12:50:52.976448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.115 [2024-07-15 12:50:52.976453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.115 [2024-07-15 12:50:52.976574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:22.115 [2024-07-15 12:50:52.976688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:22.115 [2024-07-15 12:50:52.976794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:22.115 [2024-07-15 12:50:52.976799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.051 [2024-07-15 12:50:53.682016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.051 Malloc0 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
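The trace above is the standard nvmf_tcp_init + nvmfappstart sequence: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target port at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, both directions are ping-verified, and nvmf_tgt is then started inside the namespace. A minimal standalone sketch of the same setup, using the interface names from this log and an illustrative binary path:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
# -m 0x78 is the reactor core mask: 0x78 = 0b1111000, i.e. cores 3-6,
# which matches the four "Reactor started on core 3..6" notices above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78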
00:16:23.051 [2024-07-15 12:50:53.733653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:23.051 { 00:16:23.051 "params": { 00:16:23.051 "name": "Nvme$subsystem", 00:16:23.051 "trtype": "$TEST_TRANSPORT", 00:16:23.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.051 "adrfam": "ipv4", 00:16:23.051 "trsvcid": "$NVMF_PORT", 00:16:23.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.051 "hdgst": ${hdgst:-false}, 00:16:23.051 "ddgst": ${ddgst:-false} 00:16:23.051 }, 00:16:23.051 "method": "bdev_nvme_attach_controller" 00:16:23.051 } 00:16:23.051 EOF 00:16:23.051 )") 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:23.051 12:50:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:23.051 "params": { 00:16:23.051 "name": "Nvme1", 00:16:23.051 "trtype": "tcp", 00:16:23.051 "traddr": "10.0.0.2", 00:16:23.051 "adrfam": "ipv4", 00:16:23.051 "trsvcid": "4420", 00:16:23.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.051 "hdgst": false, 00:16:23.051 "ddgst": false 00:16:23.051 }, 00:16:23.051 "method": "bdev_nvme_attach_controller" 00:16:23.051 }' 00:16:23.051 [2024-07-15 12:50:53.783461] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
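Before bdevio runs, the target is provisioned entirely over rpc_cmd: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. The same sequence issued directly with scripts/rpc.py (flags copied verbatim from the trace; /var/tmp/spdk.sock is the default socket):

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches as an initiator through the bdev_nvme_attach_controller JSON generated above and exercises the resulting Nvme1n1 bdev.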
00:16:23.052 [2024-07-15 12:50:53.783505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707544 ] 00:16:23.052 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.052 [2024-07-15 12:50:53.849129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:23.052 [2024-07-15 12:50:53.924686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.052 [2024-07-15 12:50:53.924792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.052 [2024-07-15 12:50:53.924792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.310 I/O targets: 00:16:23.310 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:23.310 00:16:23.310 00:16:23.310 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.310 http://cunit.sourceforge.net/ 00:16:23.310 00:16:23.310 00:16:23.310 Suite: bdevio tests on: Nvme1n1 00:16:23.310 Test: blockdev write read block ...passed 00:16:23.310 Test: blockdev write zeroes read block ...passed 00:16:23.310 Test: blockdev write zeroes read no split ...passed 00:16:23.568 Test: blockdev write zeroes read split ...passed 00:16:23.568 Test: blockdev write zeroes read split partial ...passed 00:16:23.568 Test: blockdev reset ...[2024-07-15 12:50:54.324759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:23.568 [2024-07-15 12:50:54.324827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af66d0 (9): Bad file descriptor 00:16:23.568 [2024-07-15 12:50:54.341670] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:23.568 passed 00:16:23.568 Test: blockdev write read 8 blocks ...passed 00:16:23.568 Test: blockdev write read size > 128k ...passed 00:16:23.568 Test: blockdev write read invalid size ...passed 00:16:23.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:23.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:23.568 Test: blockdev write read max offset ...passed 00:16:23.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:23.568 Test: blockdev writev readv 8 blocks ...passed 00:16:23.568 Test: blockdev writev readv 30 x 1block ...passed 00:16:23.826 Test: blockdev writev readv block ...passed 00:16:23.826 Test: blockdev writev readv size > 128k ...passed 00:16:23.826 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:23.826 Test: blockdev comparev and writev ...[2024-07-15 12:50:54.552307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.552336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.552350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.552358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.552665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.552678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.552686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.552982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.553000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.553012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.553021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.553289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.553313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.826 [2024-07-15 12:50:54.553321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:23.826 passed 00:16:23.826 Test: blockdev nvme passthru rw ...passed 00:16:23.826 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:50:54.635703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.826 [2024-07-15 12:50:54.635719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.635870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.826 [2024-07-15 12:50:54.635880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.636036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.826 [2024-07-15 12:50:54.636045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:23.826 [2024-07-15 12:50:54.636196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.826 [2024-07-15 12:50:54.636206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:23.826 passed 00:16:23.826 Test: blockdev nvme admin passthru ...passed 00:16:23.826 Test: blockdev copy ...passed 00:16:23.826 00:16:23.826 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.826 suites 1 1 n/a 0 0 00:16:23.826 tests 23 23 23 0 0 00:16:23.826 asserts 152 152 152 0 n/a 00:16:23.826 00:16:23.826 Elapsed time = 1.157 seconds 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.085 rmmod nvme_tcp 00:16:24.085 rmmod nvme_fabrics 00:16:24.085 rmmod nvme_keyring 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1707506 ']' 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1707506 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1707506 ']' 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1707506 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1707506 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1707506' 00:16:24.085 killing process with pid 1707506 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1707506 00:16:24.085 12:50:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1707506 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.344 12:50:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.879 12:50:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.879 00:16:26.879 real 0m10.373s 00:16:26.879 user 0m12.583s 00:16:26.879 sys 0m4.869s 00:16:26.879 12:50:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.879 12:50:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:26.879 ************************************ 00:16:26.879 END TEST nvmf_bdevio 00:16:26.879 ************************************ 00:16:26.879 12:50:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.879 12:50:57 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:26.879 12:50:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.879 12:50:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.879 12:50:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.879 ************************************ 00:16:26.879 START TEST nvmf_auth_target 00:16:26.879 ************************************ 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:26.879 * Looking for test storage... 
00:16:26.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.879 12:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.880 12:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.880 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.880 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.880 12:50:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.880 12:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.148 12:51:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:32.148 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:32.148 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:32.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:32.149 Found net devices under 0000:86:00.0: cvl_0_0 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:32.149 Found net devices under 0000:86:00.1: cvl_0_1 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.149 12:51:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.149 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.149 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.149 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:32.149 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:32.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:16:32.418 00:16:32.418 --- 10.0.0.2 ping statistics --- 00:16:32.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.418 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:32.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:16:32.418 00:16:32.418 --- 10.0.0.1 ping statistics --- 00:16:32.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.418 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1711414 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1711414 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1711414 ']' 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
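auth.sh drives the DH-HMAC-CHAP handshake from both ends, so it starts two SPDK processes: the nvmf_tgt launched above (inside the namespace, with -L nvmf_auth debug logging) and, a few lines below, a separate host-side spdk_tgt listening on its own RPC socket. In sketch form, with shortened binary paths:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &   # the "host" side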
00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.418 12:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1711652 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=79868fcea53f869940e90bb51c58105a597174d9040a795a 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.E6Z 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 79868fcea53f869940e90bb51c58105a597174d9040a795a 0 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 79868fcea53f869940e90bb51c58105a597174d9040a795a 0 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=79868fcea53f869940e90bb51c58105a597174d9040a795a 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.E6Z 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.E6Z 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.E6Z 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=045581eea3e2d30515f71d5f145eec73171867ef2266e7416222fe6d18b927b0 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fU1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 045581eea3e2d30515f71d5f145eec73171867ef2266e7416222fe6d18b927b0 3 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 045581eea3e2d30515f71d5f145eec73171867ef2266e7416222fe6d18b927b0 3 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=045581eea3e2d30515f71d5f145eec73171867ef2266e7416222fe6d18b927b0 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fU1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fU1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.fU1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=96c1e07c182ab8e0ba96db2979876dfe 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VR8 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 96c1e07c182ab8e0ba96db2979876dfe 1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 96c1e07c182ab8e0ba96db2979876dfe 1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=96c1e07c182ab8e0ba96db2979876dfe 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VR8 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VR8 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.VR8 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:33.354 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=769f0fa13ced3c00c9371dc6f0bcaedb6b09cfd3d40591c0 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mNA 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 769f0fa13ced3c00c9371dc6f0bcaedb6b09cfd3d40591c0 2 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 769f0fa13ced3c00c9371dc6f0bcaedb6b09cfd3d40591c0 2 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=769f0fa13ced3c00c9371dc6f0bcaedb6b09cfd3d40591c0 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mNA 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mNA 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.mNA 00:16:33.612 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bb2271309fc83e2090cbc44eea65f73683afc159023a1ca8 00:16:33.613 
12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YAP 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bb2271309fc83e2090cbc44eea65f73683afc159023a1ca8 2 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bb2271309fc83e2090cbc44eea65f73683afc159023a1ca8 2 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bb2271309fc83e2090cbc44eea65f73683afc159023a1ca8 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YAP 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YAP 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.YAP 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fec295a1838f6b52724a60d3c3fcc732 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4pW 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fec295a1838f6b52724a60d3c3fcc732 1 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fec295a1838f6b52724a60d3c3fcc732 1 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fec295a1838f6b52724a60d3c3fcc732 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4pW 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4pW 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4pW 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e30794360d84c0a6706cb59aa5448183d1c5bb4384e898015f420843a452895d 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Rjj 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e30794360d84c0a6706cb59aa5448183d1c5bb4384e898015f420843a452895d 3 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e30794360d84c0a6706cb59aa5448183d1c5bb4384e898015f420843a452895d 3 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e30794360d84c0a6706cb59aa5448183d1c5bb4384e898015f420843a452895d 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Rjj 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Rjj 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Rjj 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1711414 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1711414 ']' 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
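The gen_dhchap_key calls traced above (nvmf/common.sh@723-@732) each mint one DHCHAP secret: read len/2 bytes from /dev/urandom as a hex string (xxd -p -c0), wrap that string into a DHHC-1 secret via format_dhchap_key/format_key, write it to a mktemp file, and chmod the file to 0600 before echoing its path back into the keys[]/ckeys[] arrays. A minimal sketch of the helper, reconstructed from this xtrace; the body of the "python -" step is not captured in the log, so the payload layout (base64 of the ASCII hex key plus a 4-byte CRC-32 trailer, per the DH-HMAC-CHAP ASCII secret representation) is an assumption:

  gen_dhchap_key() { # usage: gen_dhchap_key <null|sha256|sha384|sha512> <key length in hex chars>
      local digest=$1 len=$2 file key
      local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes -> len hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # Emit DHHC-1:<hash id>:<base64(hex key + CRC-32 trailer)>: into the key file (assumed layout)
      python3 -c 'import base64, sys, zlib; key = sys.argv[1].encode(); crc = zlib.crc32(key).to_bytes(4, "little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

The two-digit hash id in the prefix (00 = none, 01 = sha256, 02 = sha384, 03 = sha512) mirrors the digests table in the trace, which is why the sha384/48 invocations above run with digest=2 and land in /tmp/spdk.key-sha384.* files.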
00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.613 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1711652 /var/tmp/host.sock 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1711652 ']' 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:33.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.872 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.E6Z 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.E6Z 00:16:34.131 12:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.E6Z 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.fU1 ]] 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fU1 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fU1 00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fU1
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VR8
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VR8
00:16:34.390 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VR8
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.mNA ]]
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mNA
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mNA
00:16:34.649 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mNA
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YAP
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YAP
00:16:34.908 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YAP
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.4pW ]]
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4pW
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4pW
00:16:35.166 12:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4pW
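Each keyfile is registered twice: once in the target application's keyring through rpc_cmd, and once in the host-side application through hostrpc, a thin wrapper that points scripts/rpc.py at /var/tmp/host.sock; the controller key ckey<i> is registered only when one was generated for that slot. Condensed, the loop at target/auth.sh@81-@86 amounts to this sketch (that rpc_cmd talks to the default /var/tmp/spdk.sock is an assumption; the log expands only the host-side invocations):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"                        # target-side keyring
      "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host-side keyring
      if [[ -n ${ckeys[$i]} ]]; then                                           # controller key, if any
          "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
          "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      fi
  done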
00:16:35.166 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:16:35.166 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Rjj
00:16:35.166 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.166 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.167 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.167 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Rjj
00:16:35.167 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Rjj
00:16:35.425 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]]
00:16:35.425 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:16:35.425 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:35.425 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:35.425 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:35.425 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.684 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.943
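With all four key slots loaded on both sides, the test enters its digest x dhgroup x keyid matrix (target/auth.sh@91-@96): each pass pins the host to one combination via bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups, registers the host NQN on cnode0 with the key under test (nvmf_subsystem_add_host), and performs the authenticated connect with bdev_nvme_attach_controller, as just traced for sha256/null/key0. The entries that follow verify the negotiated parameters by fetching the subsystem's qpairs and checking three jq projections; wrapped in a function, the check would look like this sketch (verify_qpair_auth is a hypothetical name, the test runs the checks inline):

  verify_qpair_auth() { # usage: verify_qpair_auth <digest> <dhgroup>, e.g. verify_qpair_auth sha256 null
      local qpairs
      qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$1" ]]        # negotiated hash
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$2" ]]        # negotiated DH group
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished
  }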
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:35.943 {
00:16:35.943 "cntlid": 1,
00:16:35.943 "qid": 0,
00:16:35.943 "state": "enabled",
00:16:35.943 "thread": "nvmf_tgt_poll_group_000",
00:16:35.943 "listen_address": {
00:16:35.943 "trtype": "TCP",
00:16:35.943 "adrfam": "IPv4",
00:16:35.943 "traddr": "10.0.0.2",
00:16:35.943 "trsvcid": "4420"
00:16:35.943 },
00:16:35.943 "peer_address": {
00:16:35.943 "trtype": "TCP",
00:16:35.943 "adrfam": "IPv4",
00:16:35.943 "traddr": "10.0.0.1",
00:16:35.943 "trsvcid": "33502"
00:16:35.943 },
00:16:35.943 "auth": {
00:16:35.943 "state": "completed",
00:16:35.943 "digest": "sha256",
00:16:35.943 "dhgroup": "null"
00:16:35.943 }
00:16:35.943 }
00:16:35.943 ]'
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:35.943 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:36.203 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:36.203 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:36.203 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.203 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.203 12:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.462 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=:
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:37.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
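Once the bdev-layer checks pass, the same credentials are exercised through the kernel initiator: nvme connect receives the literal DHHC-1 strings (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key) rather than keyring names, and the controller is torn down again with nvme disconnect plus nvmf_subsystem_remove_host before the next keyid is tried. Since the payload is just base64 over the ASCII hex key plus a 4-byte trailer, the secrets in the log map back to the generated keys; a quick decode (a hypothetical check, not part of the test; head -c -4, which strips the trailer, is GNU coreutils syntax):

  secret='DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==:'
  cut -d: -f3 <<< "$secret" | base64 -d | head -c -4 && echo
  # prints 79868fcea53f869940e90bb51c58105a597174d9040a795a -- a 48-hex-digit key with hash id 00 (no digest), matching the key0 slot used in the connect above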
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.032 12:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.290
00:16:37.290 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:37.290 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:37.290 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:37.549 {
00:16:37.549 "cntlid": 3,
00:16:37.549 "qid": 0,
"state": "enabled", 00:16:37.549 "thread": "nvmf_tgt_poll_group_000", 00:16:37.549 "listen_address": { 00:16:37.549 "trtype": "TCP", 00:16:37.549 "adrfam": "IPv4", 00:16:37.549 "traddr": "10.0.0.2", 00:16:37.549 "trsvcid": "4420" 00:16:37.549 }, 00:16:37.549 "peer_address": { 00:16:37.549 "trtype": "TCP", 00:16:37.549 "adrfam": "IPv4", 00:16:37.549 "traddr": "10.0.0.1", 00:16:37.549 "trsvcid": "40746" 00:16:37.549 }, 00:16:37.549 "auth": { 00:16:37.549 "state": "completed", 00:16:37.549 "digest": "sha256", 00:16:37.549 "dhgroup": "null" 00:16:37.549 } 00:16:37.549 } 00:16:37.549 ]' 00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.549 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.550 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:37.550 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.550 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.550 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.550 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.809 12:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.376 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:38.634 12:51:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.634 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.893 00:16:38.893 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.893 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.893 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.893 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.152 { 00:16:39.152 "cntlid": 5, 00:16:39.152 "qid": 0, 00:16:39.152 "state": "enabled", 00:16:39.152 "thread": "nvmf_tgt_poll_group_000", 00:16:39.152 "listen_address": { 00:16:39.152 "trtype": "TCP", 00:16:39.152 "adrfam": "IPv4", 00:16:39.152 "traddr": "10.0.0.2", 00:16:39.152 "trsvcid": "4420" 00:16:39.152 }, 00:16:39.152 "peer_address": { 00:16:39.152 "trtype": "TCP", 00:16:39.152 "adrfam": "IPv4", 00:16:39.152 "traddr": "10.0.0.1", 00:16:39.152 "trsvcid": "40784" 00:16:39.152 }, 00:16:39.152 "auth": { 00:16:39.152 "state": "completed", 00:16:39.152 "digest": "sha256", 00:16:39.152 "dhgroup": "null" 00:16:39.152 } 00:16:39.152 } 00:16:39.152 ]' 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.152 12:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.411 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.979 12:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.237 00:16:40.237 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.238 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.238 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.496 { 00:16:40.496 "cntlid": 7, 00:16:40.496 "qid": 0, 00:16:40.496 "state": "enabled", 00:16:40.496 "thread": "nvmf_tgt_poll_group_000", 00:16:40.496 "listen_address": { 00:16:40.496 "trtype": "TCP", 00:16:40.496 "adrfam": "IPv4", 00:16:40.496 "traddr": "10.0.0.2", 00:16:40.496 "trsvcid": "4420" 00:16:40.496 }, 00:16:40.496 "peer_address": { 00:16:40.496 "trtype": "TCP", 00:16:40.496 "adrfam": "IPv4", 00:16:40.496 "traddr": "10.0.0.1", 00:16:40.496 "trsvcid": "40814" 00:16:40.496 }, 00:16:40.496 "auth": { 00:16:40.496 "state": "completed", 00:16:40.496 "digest": "sha256", 00:16:40.496 "dhgroup": "null" 00:16:40.496 } 00:16:40.496 } 00:16:40.496 ]' 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:40.496 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.759 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.759 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.759 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.759 12:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.326 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.584 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.585 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.844 00:16:41.844 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.844 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.844 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.102 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.102 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.102 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:42.102 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.102 12:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.102 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.102 { 00:16:42.102 "cntlid": 9, 00:16:42.102 "qid": 0, 00:16:42.102 "state": "enabled", 00:16:42.102 "thread": "nvmf_tgt_poll_group_000", 00:16:42.102 "listen_address": { 00:16:42.102 "trtype": "TCP", 00:16:42.102 "adrfam": "IPv4", 00:16:42.102 "traddr": "10.0.0.2", 00:16:42.102 "trsvcid": "4420" 00:16:42.102 }, 00:16:42.102 "peer_address": { 00:16:42.102 "trtype": "TCP", 00:16:42.102 "adrfam": "IPv4", 00:16:42.102 "traddr": "10.0.0.1", 00:16:42.102 "trsvcid": "40844" 00:16:42.102 }, 00:16:42.102 "auth": { 00:16:42.102 "state": "completed", 00:16:42.102 "digest": "sha256", 00:16:42.102 "dhgroup": "ffdhe2048" 00:16:42.102 } 00:16:42.102 } 00:16:42.102 ]' 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.103 12:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.363 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.934 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:43.192 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:43.192 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.192 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.193 12:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.451 00:16:43.451 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.452 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.452 { 00:16:43.452 "cntlid": 11, 00:16:43.452 "qid": 0, 00:16:43.452 "state": "enabled", 00:16:43.452 "thread": "nvmf_tgt_poll_group_000", 00:16:43.452 "listen_address": { 00:16:43.452 "trtype": "TCP", 00:16:43.452 "adrfam": "IPv4", 00:16:43.452 "traddr": "10.0.0.2", 00:16:43.452 "trsvcid": "4420" 00:16:43.452 }, 00:16:43.452 "peer_address": { 00:16:43.452 "trtype": "TCP", 00:16:43.452 "adrfam": "IPv4", 00:16:43.452 "traddr": "10.0.0.1", 00:16:43.452 "trsvcid": "40880" 00:16:43.452 }, 00:16:43.452 "auth": { 00:16:43.452 "state": "completed", 00:16:43.452 "digest": "sha256", 00:16:43.452 "dhgroup": "ffdhe2048" 00:16:43.452 } 00:16:43.452 } 00:16:43.452 ]' 00:16:43.452 
12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.711 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.970 12:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.539 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.540 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.798 00:16:44.798 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.798 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.798 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.056 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.056 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.056 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.056 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.056 12:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.056 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.056 { 00:16:45.056 "cntlid": 13, 00:16:45.056 "qid": 0, 00:16:45.056 "state": "enabled", 00:16:45.056 "thread": "nvmf_tgt_poll_group_000", 00:16:45.056 "listen_address": { 00:16:45.057 "trtype": "TCP", 00:16:45.057 "adrfam": "IPv4", 00:16:45.057 "traddr": "10.0.0.2", 00:16:45.057 "trsvcid": "4420" 00:16:45.057 }, 00:16:45.057 "peer_address": { 00:16:45.057 "trtype": "TCP", 00:16:45.057 "adrfam": "IPv4", 00:16:45.057 "traddr": "10.0.0.1", 00:16:45.057 "trsvcid": "40908" 00:16:45.057 }, 00:16:45.057 "auth": { 00:16:45.057 "state": "completed", 00:16:45.057 "digest": "sha256", 00:16:45.057 "dhgroup": "ffdhe2048" 00:16:45.057 } 00:16:45.057 } 00:16:45.057 ]' 00:16:45.057 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.057 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.057 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.057 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.057 12:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.057 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.057 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.057 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.316 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.883 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.884 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.142 12:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.465 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.465 { 00:16:46.465 "cntlid": 15, 00:16:46.465 "qid": 0, 00:16:46.465 "state": "enabled", 00:16:46.465 "thread": "nvmf_tgt_poll_group_000", 00:16:46.465 "listen_address": { 00:16:46.465 "trtype": "TCP", 00:16:46.465 "adrfam": "IPv4", 00:16:46.465 "traddr": "10.0.0.2", 00:16:46.465 "trsvcid": "4420" 00:16:46.465 }, 00:16:46.465 "peer_address": { 00:16:46.465 "trtype": "TCP", 00:16:46.465 "adrfam": "IPv4", 00:16:46.465 "traddr": "10.0.0.1", 00:16:46.465 "trsvcid": "36994" 00:16:46.465 }, 00:16:46.465 "auth": { 00:16:46.465 "state": "completed", 00:16:46.465 "digest": "sha256", 00:16:46.465 "dhgroup": "ffdhe2048" 00:16:46.465 } 00:16:46.465 } 00:16:46.465 ]' 00:16:46.465 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.724 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.982 12:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.549 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.550 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.550 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.808 00:16:47.808 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.809 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.809 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.067 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.068 { 00:16:48.068 "cntlid": 17, 00:16:48.068 "qid": 0, 00:16:48.068 "state": "enabled", 00:16:48.068 "thread": "nvmf_tgt_poll_group_000", 00:16:48.068 "listen_address": { 00:16:48.068 "trtype": "TCP", 00:16:48.068 "adrfam": "IPv4", 00:16:48.068 "traddr": 
"10.0.0.2", 00:16:48.068 "trsvcid": "4420" 00:16:48.068 }, 00:16:48.068 "peer_address": { 00:16:48.068 "trtype": "TCP", 00:16:48.068 "adrfam": "IPv4", 00:16:48.068 "traddr": "10.0.0.1", 00:16:48.068 "trsvcid": "37034" 00:16:48.068 }, 00:16:48.068 "auth": { 00:16:48.068 "state": "completed", 00:16:48.068 "digest": "sha256", 00:16:48.068 "dhgroup": "ffdhe3072" 00:16:48.068 } 00:16:48.068 } 00:16:48.068 ]' 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.068 12:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.068 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.068 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.327 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.327 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.327 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.327 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.895 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.154 12:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.412 00:16:49.412 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.412 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.412 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.671 { 00:16:49.671 "cntlid": 19, 00:16:49.671 "qid": 0, 00:16:49.671 "state": "enabled", 00:16:49.671 "thread": "nvmf_tgt_poll_group_000", 00:16:49.671 "listen_address": { 00:16:49.671 "trtype": "TCP", 00:16:49.671 "adrfam": "IPv4", 00:16:49.671 "traddr": "10.0.0.2", 00:16:49.671 "trsvcid": "4420" 00:16:49.671 }, 00:16:49.671 "peer_address": { 00:16:49.671 "trtype": "TCP", 00:16:49.671 "adrfam": "IPv4", 00:16:49.671 "traddr": "10.0.0.1", 00:16:49.671 "trsvcid": "37048" 00:16:49.671 }, 00:16:49.671 "auth": { 00:16:49.671 "state": "completed", 00:16:49.671 "digest": "sha256", 00:16:49.671 "dhgroup": "ffdhe3072" 00:16:49.671 } 00:16:49.671 } 00:16:49.671 ]' 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.671 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.930 12:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.497 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.756 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.015 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.015 { 00:16:51.015 "cntlid": 21, 00:16:51.015 "qid": 0, 00:16:51.015 "state": "enabled", 00:16:51.015 "thread": "nvmf_tgt_poll_group_000", 00:16:51.015 "listen_address": { 00:16:51.015 "trtype": "TCP", 00:16:51.015 "adrfam": "IPv4", 00:16:51.015 "traddr": "10.0.0.2", 00:16:51.015 "trsvcid": "4420" 00:16:51.015 }, 00:16:51.015 "peer_address": { 00:16:51.015 "trtype": "TCP", 00:16:51.015 "adrfam": "IPv4", 00:16:51.015 "traddr": "10.0.0.1", 00:16:51.015 "trsvcid": "37074" 00:16:51.015 }, 00:16:51.015 "auth": { 00:16:51.015 "state": "completed", 00:16:51.015 "digest": "sha256", 00:16:51.015 "dhgroup": "ffdhe3072" 00:16:51.015 } 00:16:51.015 } 00:16:51.015 ]' 00:16:51.015 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.274 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.274 12:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.274 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.274 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.274 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.274 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.274 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.533 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
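
The trace above is one full pass of the script's connect_authenticate helper (target/auth.sh@96). Distilled from the commands visible in the log — and assuming, as the excerpt implies but does not show, that key0/ckey0 and friends name DH-HMAC-CHAP keys registered earlier in the script — a single iteration looks roughly like this sketch:

    # Restrict the host-side initiator to one digest/dhgroup combination.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Target side: allow the host NQN, binding it to the keys under test.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side (SPDK initiator): attach, which triggers authentication.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify on the target that the admin qpair actually authenticated.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]

    # Tear down, then repeat the handshake with the kernel initiator.
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Note the keyid 3 iterations that follow: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible at target/auth.sh@37 drops the controller key entirely, and those passes exercise unidirectional (host-only) authentication.
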
00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.101 12:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.101 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.360 00:16:52.360 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.360 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.360 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.619 { 00:16:52.619 "cntlid": 23, 00:16:52.619 "qid": 0, 00:16:52.619 "state": "enabled", 00:16:52.619 "thread": "nvmf_tgt_poll_group_000", 00:16:52.619 "listen_address": { 00:16:52.619 "trtype": "TCP", 00:16:52.619 "adrfam": "IPv4", 00:16:52.619 "traddr": "10.0.0.2", 00:16:52.619 "trsvcid": "4420" 00:16:52.619 }, 00:16:52.619 "peer_address": { 00:16:52.619 "trtype": "TCP", 00:16:52.619 "adrfam": "IPv4", 00:16:52.619 "traddr": "10.0.0.1", 00:16:52.619 "trsvcid": "37106" 00:16:52.619 }, 00:16:52.619 "auth": { 00:16:52.619 "state": "completed", 00:16:52.619 "digest": "sha256", 00:16:52.619 "dhgroup": "ffdhe3072" 00:16:52.619 } 00:16:52.619 } 00:16:52.619 ]' 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.619 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.889 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.889 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.889 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.889 12:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:16:53.461 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.462 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.720 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.979 00:16:53.979 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.979 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.979 12:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.238 { 00:16:54.238 "cntlid": 25, 00:16:54.238 "qid": 0, 00:16:54.238 "state": "enabled", 00:16:54.238 "thread": "nvmf_tgt_poll_group_000", 00:16:54.238 "listen_address": { 00:16:54.238 "trtype": "TCP", 00:16:54.238 "adrfam": "IPv4", 00:16:54.238 "traddr": "10.0.0.2", 00:16:54.238 "trsvcid": "4420" 00:16:54.238 }, 00:16:54.238 "peer_address": { 00:16:54.238 "trtype": "TCP", 00:16:54.238 "adrfam": "IPv4", 00:16:54.238 "traddr": "10.0.0.1", 00:16:54.238 "trsvcid": "37124" 00:16:54.238 }, 00:16:54.238 "auth": { 00:16:54.238 "state": "completed", 00:16:54.238 "digest": "sha256", 00:16:54.238 "dhgroup": "ffdhe4096" 00:16:54.238 } 00:16:54.238 } 00:16:54.238 ]' 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.238 12:51:25 
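
Two SPDK applications are in play throughout this trace. rpc_cmd talks to the nvmf target over the default RPC socket, while the hostrpc helper (target/auth.sh@31, expanded on every host-side line above) pins rpc.py to /var/tmp/host.sock, where a second SPDK instance acts as the NVMe-oF initiator via bdev_nvme. Judging purely from its expansion in the trace, the helper is presumably a thin wrapper along these lines:

    # Hypothetical reconstruction of the hostrpc helper seen in the trace:
    # forward all arguments to the initiator instance's RPC socket.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

That split is also why each iteration authenticates twice: once through the SPDK initiator (bdev_nvme_attach_controller) and once through the Linux kernel initiator (nvme connect).
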
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.238 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.496 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.064 12:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.323 12:51:26 
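
The --dhchap-secret strings passed to nvme connect use the standard NVMe TP 8006 representation, DHHC-1:<t>:<base64>:, where <t> encodes the key transformation (00 = no transform, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material plus a CRC-32 check value. The mix of 00, 01, 02 and 03 prefixes visible in this run shows the script deliberately covers all four variants. Keys of this form can be produced with nvme-cli's generator — the flags below are from memory, so treat this as a sketch rather than a reference invocation:

    # Generate a DHHC-1 secret bound to a host NQN; -m selects the
    # key transformation (0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512).
    nvme gen-dhchap-key -m 3 \
        -n nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
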
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.323 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.324 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.582 00:16:55.582 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.582 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.582 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.841 { 00:16:55.841 "cntlid": 27, 00:16:55.841 "qid": 0, 00:16:55.841 "state": "enabled", 00:16:55.841 "thread": "nvmf_tgt_poll_group_000", 00:16:55.841 "listen_address": { 00:16:55.841 "trtype": "TCP", 00:16:55.841 "adrfam": "IPv4", 00:16:55.841 "traddr": "10.0.0.2", 00:16:55.841 "trsvcid": "4420" 00:16:55.841 }, 00:16:55.841 "peer_address": { 00:16:55.841 "trtype": "TCP", 00:16:55.841 "adrfam": "IPv4", 00:16:55.841 "traddr": "10.0.0.1", 00:16:55.841 "trsvcid": "37138" 00:16:55.841 }, 00:16:55.841 "auth": { 00:16:55.841 "state": "completed", 00:16:55.841 "digest": "sha256", 00:16:55.841 "dhgroup": "ffdhe4096" 00:16:55.841 } 00:16:55.841 } 00:16:55.841 ]' 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.841 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.102 12:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.668 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.927 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.186 00:16:57.186 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.186 12:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.186 12:51:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.445 { 00:16:57.445 "cntlid": 29, 00:16:57.445 "qid": 0, 00:16:57.445 "state": "enabled", 00:16:57.445 "thread": "nvmf_tgt_poll_group_000", 00:16:57.445 "listen_address": { 00:16:57.445 "trtype": "TCP", 00:16:57.445 "adrfam": "IPv4", 00:16:57.445 "traddr": "10.0.0.2", 00:16:57.445 "trsvcid": "4420" 00:16:57.445 }, 00:16:57.445 "peer_address": { 00:16:57.445 "trtype": "TCP", 00:16:57.445 "adrfam": "IPv4", 00:16:57.445 "traddr": "10.0.0.1", 00:16:57.445 "trsvcid": "48346" 00:16:57.445 }, 00:16:57.445 "auth": { 00:16:57.445 "state": "completed", 00:16:57.445 "digest": "sha256", 00:16:57.445 "dhgroup": "ffdhe4096" 00:16:57.445 } 00:16:57.445 } 00:16:57.445 ]' 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.445 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.703 12:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.271 12:51:29 
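
One detail worth noticing in the qpair dumps: listen_address stays fixed at 10.0.0.2:4420 while peer_address.trsvcid is a fresh ephemeral port each time (37034, 37048, ... 48346 above) and cntlid advances with every attach. Each iteration therefore really is a brand-new TCP connection and controller performing a full DH-HMAC-CHAP handshake, not a reused session. The peer side can be pulled out of the same RPC output as the auth checks — a hypothetical one-liner:

    # Show where the current admin connection came from.
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0] | "\(.peer_address.traddr):\(.peer_address.trsvcid) cntlid=\(.cntlid)"'
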
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.271 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.530 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.789 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.789 { 00:16:58.789 "cntlid": 31, 00:16:58.789 "qid": 0, 00:16:58.789 "state": "enabled", 00:16:58.789 "thread": "nvmf_tgt_poll_group_000", 00:16:58.789 "listen_address": { 00:16:58.789 "trtype": "TCP", 00:16:58.789 "adrfam": "IPv4", 00:16:58.789 "traddr": "10.0.0.2", 00:16:58.789 "trsvcid": "4420" 00:16:58.789 }, 
00:16:58.789 "peer_address": { 00:16:58.789 "trtype": "TCP", 00:16:58.789 "adrfam": "IPv4", 00:16:58.789 "traddr": "10.0.0.1", 00:16:58.789 "trsvcid": "48372" 00:16:58.789 }, 00:16:58.789 "auth": { 00:16:58.789 "state": "completed", 00:16:58.789 "digest": "sha256", 00:16:58.789 "dhgroup": "ffdhe4096" 00:16:58.789 } 00:16:58.789 } 00:16:58.789 ]' 00:16:58.789 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.048 12:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.307 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.876 12:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.445 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.445 { 00:17:00.445 "cntlid": 33, 00:17:00.445 "qid": 0, 00:17:00.445 "state": "enabled", 00:17:00.445 "thread": "nvmf_tgt_poll_group_000", 00:17:00.445 "listen_address": { 00:17:00.445 "trtype": "TCP", 00:17:00.445 "adrfam": "IPv4", 00:17:00.445 "traddr": "10.0.0.2", 00:17:00.445 "trsvcid": "4420" 00:17:00.445 }, 00:17:00.445 "peer_address": { 00:17:00.445 "trtype": "TCP", 00:17:00.445 "adrfam": "IPv4", 00:17:00.445 "traddr": "10.0.0.1", 00:17:00.445 "trsvcid": "48400" 00:17:00.445 }, 00:17:00.445 "auth": { 00:17:00.445 "state": "completed", 00:17:00.445 "digest": "sha256", 00:17:00.445 "dhgroup": "ffdhe6144" 00:17:00.445 } 00:17:00.445 } 00:17:00.445 ]' 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.445 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.713 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.713 12:51:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.713 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.713 12:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.281 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.541 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.800 00:17:01.800 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.800 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.800 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.059 { 00:17:02.059 "cntlid": 35, 00:17:02.059 "qid": 0, 00:17:02.059 "state": "enabled", 00:17:02.059 "thread": "nvmf_tgt_poll_group_000", 00:17:02.059 "listen_address": { 00:17:02.059 "trtype": "TCP", 00:17:02.059 "adrfam": "IPv4", 00:17:02.059 "traddr": "10.0.0.2", 00:17:02.059 "trsvcid": "4420" 00:17:02.059 }, 00:17:02.059 "peer_address": { 00:17:02.059 "trtype": "TCP", 00:17:02.059 "adrfam": "IPv4", 00:17:02.059 "traddr": "10.0.0.1", 00:17:02.059 "trsvcid": "48430" 00:17:02.059 }, 00:17:02.059 "auth": { 00:17:02.059 "state": "completed", 00:17:02.059 "digest": "sha256", 00:17:02.059 "dhgroup": "ffdhe6144" 00:17:02.059 } 00:17:02.059 } 00:17:02.059 ]' 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.059 12:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.318 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.318 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.318 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.318 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.884 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.144 12:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.403 00:17:03.403 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.403 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.403 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.661 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.661 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.661 12:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.661 12:51:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.661 12:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.661 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.661 { 00:17:03.661 "cntlid": 37, 00:17:03.661 "qid": 0, 00:17:03.661 "state": "enabled", 00:17:03.661 "thread": "nvmf_tgt_poll_group_000", 00:17:03.661 "listen_address": { 00:17:03.662 "trtype": "TCP", 00:17:03.662 "adrfam": "IPv4", 00:17:03.662 "traddr": "10.0.0.2", 00:17:03.662 "trsvcid": "4420" 00:17:03.662 }, 00:17:03.662 "peer_address": { 00:17:03.662 "trtype": "TCP", 00:17:03.662 "adrfam": "IPv4", 00:17:03.662 "traddr": "10.0.0.1", 00:17:03.662 "trsvcid": "48452" 00:17:03.662 }, 00:17:03.662 "auth": { 00:17:03.662 "state": "completed", 00:17:03.662 "digest": "sha256", 00:17:03.662 "dhgroup": "ffdhe6144" 00:17:03.662 } 00:17:03.662 } 00:17:03.662 ]' 00:17:03.662 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.662 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.662 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.662 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.662 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.920 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.920 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.920 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.920 12:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.487 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.746 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.005 00:17:05.005 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.005 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.005 12:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.263 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.263 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.263 12:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.263 12:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.263 12:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.263 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.263 { 00:17:05.263 "cntlid": 39, 00:17:05.263 "qid": 0, 00:17:05.263 "state": "enabled", 00:17:05.263 "thread": "nvmf_tgt_poll_group_000", 00:17:05.263 "listen_address": { 00:17:05.263 "trtype": "TCP", 00:17:05.264 "adrfam": "IPv4", 00:17:05.264 "traddr": "10.0.0.2", 00:17:05.264 "trsvcid": "4420" 00:17:05.264 }, 00:17:05.264 "peer_address": { 00:17:05.264 "trtype": "TCP", 00:17:05.264 "adrfam": "IPv4", 00:17:05.264 "traddr": "10.0.0.1", 00:17:05.264 "trsvcid": "48472" 00:17:05.264 }, 00:17:05.264 "auth": { 00:17:05.264 "state": "completed", 00:17:05.264 "digest": "sha256", 00:17:05.264 "dhgroup": "ffdhe6144" 00:17:05.264 } 00:17:05.264 } 00:17:05.264 ]' 00:17:05.264 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.264 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.264 12:51:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.264 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.264 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.521 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.521 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.521 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.521 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.127 12:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.385 12:51:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.385 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.951 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.951 { 00:17:06.951 "cntlid": 41, 00:17:06.951 "qid": 0, 00:17:06.951 "state": "enabled", 00:17:06.951 "thread": "nvmf_tgt_poll_group_000", 00:17:06.951 "listen_address": { 00:17:06.951 "trtype": "TCP", 00:17:06.951 "adrfam": "IPv4", 00:17:06.951 "traddr": "10.0.0.2", 00:17:06.951 "trsvcid": "4420" 00:17:06.951 }, 00:17:06.951 "peer_address": { 00:17:06.951 "trtype": "TCP", 00:17:06.951 "adrfam": "IPv4", 00:17:06.951 "traddr": "10.0.0.1", 00:17:06.951 "trsvcid": "55374" 00:17:06.951 }, 00:17:06.951 "auth": { 00:17:06.951 "state": "completed", 00:17:06.951 "digest": "sha256", 00:17:06.951 "dhgroup": "ffdhe8192" 00:17:06.951 } 00:17:06.951 } 00:17:06.951 ]' 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.951 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.210 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.210 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.210 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.210 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.210 12:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.210 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.778 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.037 12:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.605 00:17:08.605 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.605 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.605 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.864 { 00:17:08.864 "cntlid": 43, 00:17:08.864 "qid": 0, 00:17:08.864 "state": "enabled", 00:17:08.864 "thread": "nvmf_tgt_poll_group_000", 00:17:08.864 "listen_address": { 00:17:08.864 "trtype": "TCP", 00:17:08.864 "adrfam": "IPv4", 00:17:08.864 "traddr": "10.0.0.2", 00:17:08.864 "trsvcid": "4420" 00:17:08.864 }, 00:17:08.864 "peer_address": { 00:17:08.864 "trtype": "TCP", 00:17:08.864 "adrfam": "IPv4", 00:17:08.864 "traddr": "10.0.0.1", 00:17:08.864 "trsvcid": "55412" 00:17:08.864 }, 00:17:08.864 "auth": { 00:17:08.864 "state": "completed", 00:17:08.864 "digest": "sha256", 00:17:08.864 "dhgroup": "ffdhe8192" 00:17:08.864 } 00:17:08.864 } 00:17:08.864 ]' 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.864 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.123 12:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.690 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.950 12:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.518 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.518 { 00:17:10.518 "cntlid": 45, 00:17:10.518 "qid": 0, 00:17:10.518 "state": "enabled", 00:17:10.518 "thread": "nvmf_tgt_poll_group_000", 00:17:10.518 "listen_address": { 00:17:10.518 "trtype": "TCP", 00:17:10.518 "adrfam": "IPv4", 00:17:10.518 "traddr": "10.0.0.2", 00:17:10.518 "trsvcid": "4420" 
00:17:10.518 }, 00:17:10.518 "peer_address": { 00:17:10.518 "trtype": "TCP", 00:17:10.518 "adrfam": "IPv4", 00:17:10.518 "traddr": "10.0.0.1", 00:17:10.518 "trsvcid": "55444" 00:17:10.518 }, 00:17:10.518 "auth": { 00:17:10.518 "state": "completed", 00:17:10.518 "digest": "sha256", 00:17:10.518 "dhgroup": "ffdhe8192" 00:17:10.518 } 00:17:10.518 } 00:17:10.518 ]' 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.518 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.777 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.777 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.777 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.777 12:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.355 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.614 12:51:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.614 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.183 00:17:12.183 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.183 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.183 12:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.183 { 00:17:12.183 "cntlid": 47, 00:17:12.183 "qid": 0, 00:17:12.183 "state": "enabled", 00:17:12.183 "thread": "nvmf_tgt_poll_group_000", 00:17:12.183 "listen_address": { 00:17:12.183 "trtype": "TCP", 00:17:12.183 "adrfam": "IPv4", 00:17:12.183 "traddr": "10.0.0.2", 00:17:12.183 "trsvcid": "4420" 00:17:12.183 }, 00:17:12.183 "peer_address": { 00:17:12.183 "trtype": "TCP", 00:17:12.183 "adrfam": "IPv4", 00:17:12.183 "traddr": "10.0.0.1", 00:17:12.183 "trsvcid": "55468" 00:17:12.183 }, 00:17:12.183 "auth": { 00:17:12.183 "state": "completed", 00:17:12.183 "digest": "sha256", 00:17:12.183 "dhgroup": "ffdhe8192" 00:17:12.183 } 00:17:12.183 } 00:17:12.183 ]' 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.183 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.441 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.441 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.441 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.441 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.441 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.441 
12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.441 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:13.008 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.008 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.008 12:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.008 12:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.267 12:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.267 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:13.267 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.267 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.267 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.267 12:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.267 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.526 00:17:13.526 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.526 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.526 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.785 { 00:17:13.785 "cntlid": 49, 00:17:13.785 "qid": 0, 00:17:13.785 "state": "enabled", 00:17:13.785 "thread": "nvmf_tgt_poll_group_000", 00:17:13.785 "listen_address": { 00:17:13.785 "trtype": "TCP", 00:17:13.785 "adrfam": "IPv4", 00:17:13.785 "traddr": "10.0.0.2", 00:17:13.785 "trsvcid": "4420" 00:17:13.785 }, 00:17:13.785 "peer_address": { 00:17:13.785 "trtype": "TCP", 00:17:13.785 "adrfam": "IPv4", 00:17:13.785 "traddr": "10.0.0.1", 00:17:13.785 "trsvcid": "55478" 00:17:13.785 }, 00:17:13.785 "auth": { 00:17:13.785 "state": "completed", 00:17:13.785 "digest": "sha384", 00:17:13.785 "dhgroup": "null" 00:17:13.785 } 00:17:13.785 } 00:17:13.785 ]' 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.785 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.044 12:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.610 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.868 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.126 00:17:15.126 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.126 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.126 12:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.384 { 00:17:15.384 "cntlid": 51, 00:17:15.384 "qid": 0, 00:17:15.384 "state": "enabled", 00:17:15.384 "thread": "nvmf_tgt_poll_group_000", 00:17:15.384 "listen_address": { 00:17:15.384 "trtype": "TCP", 00:17:15.384 "adrfam": "IPv4", 00:17:15.384 "traddr": "10.0.0.2", 00:17:15.384 "trsvcid": "4420" 00:17:15.384 }, 00:17:15.384 "peer_address": { 00:17:15.384 "trtype": "TCP", 00:17:15.384 "adrfam": "IPv4", 00:17:15.384 "traddr": "10.0.0.1", 00:17:15.384 "trsvcid": "55518" 00:17:15.384 }, 00:17:15.384 "auth": { 00:17:15.384 "state": "completed", 00:17:15.384 "digest": "sha384", 00:17:15.384 "dhgroup": "null" 00:17:15.384 } 00:17:15.384 } 00:17:15.384 ]' 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.384 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.643 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:16.211 12:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.211 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:16.470 12:51:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.470 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.470 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.729 { 00:17:16.729 "cntlid": 53, 00:17:16.729 "qid": 0, 00:17:16.729 "state": "enabled", 00:17:16.729 "thread": "nvmf_tgt_poll_group_000", 00:17:16.729 "listen_address": { 00:17:16.729 "trtype": "TCP", 00:17:16.729 "adrfam": "IPv4", 00:17:16.729 "traddr": "10.0.0.2", 00:17:16.729 "trsvcid": "4420" 00:17:16.729 }, 00:17:16.729 "peer_address": { 00:17:16.729 "trtype": "TCP", 00:17:16.729 "adrfam": "IPv4", 00:17:16.729 "traddr": "10.0.0.1", 00:17:16.729 "trsvcid": "55712" 00:17:16.729 }, 00:17:16.729 "auth": { 00:17:16.729 "state": "completed", 00:17:16.729 "digest": "sha384", 00:17:16.729 "dhgroup": "null" 00:17:16.729 } 00:17:16.729 } 00:17:16.729 ]' 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:16.729 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.988 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:16.988 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.988 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.988 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.988 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.246 12:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.813 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.072 00:17:18.072 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.072 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.072 12:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.331 { 00:17:18.331 "cntlid": 55, 00:17:18.331 "qid": 0, 00:17:18.331 "state": "enabled", 00:17:18.331 "thread": "nvmf_tgt_poll_group_000", 00:17:18.331 "listen_address": { 00:17:18.331 "trtype": "TCP", 00:17:18.331 "adrfam": "IPv4", 00:17:18.331 "traddr": "10.0.0.2", 00:17:18.331 "trsvcid": "4420" 00:17:18.331 }, 00:17:18.331 "peer_address": { 00:17:18.331 "trtype": "TCP", 00:17:18.331 "adrfam": "IPv4", 00:17:18.331 "traddr": "10.0.0.1", 00:17:18.331 "trsvcid": "55742" 00:17:18.331 }, 00:17:18.331 "auth": { 00:17:18.331 "state": "completed", 00:17:18.331 "digest": "sha384", 00:17:18.331 "dhgroup": "null" 00:17:18.331 } 00:17:18.331 } 00:17:18.331 ]' 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.331 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.631 12:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:19.199 12:51:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.199 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.457 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.458 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.458 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.716 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.716 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.716 { 00:17:19.716 "cntlid": 57, 00:17:19.716 "qid": 0, 00:17:19.716 "state": "enabled", 00:17:19.716 "thread": "nvmf_tgt_poll_group_000", 00:17:19.716 "listen_address": { 00:17:19.717 "trtype": "TCP", 00:17:19.717 "adrfam": "IPv4", 00:17:19.717 "traddr": "10.0.0.2", 00:17:19.717 "trsvcid": "4420" 00:17:19.717 }, 00:17:19.717 "peer_address": { 00:17:19.717 "trtype": "TCP", 00:17:19.717 "adrfam": "IPv4", 00:17:19.717 "traddr": "10.0.0.1", 00:17:19.717 "trsvcid": "55760" 00:17:19.717 }, 00:17:19.717 "auth": { 00:17:19.717 "state": "completed", 00:17:19.717 "digest": "sha384", 00:17:19.717 "dhgroup": "ffdhe2048" 00:17:19.717 } 00:17:19.717 } 00:17:19.717 ]' 00:17:19.717 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.975 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.234 12:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.803 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.062 00:17:21.062 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.062 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.062 12:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.320 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.320 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.320 12:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.320 12:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.320 12:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.321 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.321 { 00:17:21.321 "cntlid": 59, 00:17:21.321 "qid": 0, 00:17:21.321 "state": "enabled", 00:17:21.321 "thread": "nvmf_tgt_poll_group_000", 00:17:21.321 "listen_address": { 00:17:21.321 "trtype": "TCP", 00:17:21.321 "adrfam": "IPv4", 00:17:21.321 "traddr": "10.0.0.2", 00:17:21.321 "trsvcid": "4420" 00:17:21.321 }, 00:17:21.321 "peer_address": { 00:17:21.321 "trtype": "TCP", 00:17:21.321 "adrfam": "IPv4", 00:17:21.321 
"traddr": "10.0.0.1", 00:17:21.321 "trsvcid": "55770" 00:17:21.321 }, 00:17:21.321 "auth": { 00:17:21.321 "state": "completed", 00:17:21.321 "digest": "sha384", 00:17:21.321 "dhgroup": "ffdhe2048" 00:17:21.321 } 00:17:21.321 } 00:17:21.321 ]' 00:17:21.321 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.321 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.321 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.321 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.321 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.578 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.578 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.579 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.579 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:22.146 12:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.146 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.404 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.661 00:17:22.661 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.661 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.661 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.919 { 00:17:22.919 "cntlid": 61, 00:17:22.919 "qid": 0, 00:17:22.919 "state": "enabled", 00:17:22.919 "thread": "nvmf_tgt_poll_group_000", 00:17:22.919 "listen_address": { 00:17:22.919 "trtype": "TCP", 00:17:22.919 "adrfam": "IPv4", 00:17:22.919 "traddr": "10.0.0.2", 00:17:22.919 "trsvcid": "4420" 00:17:22.919 }, 00:17:22.919 "peer_address": { 00:17:22.919 "trtype": "TCP", 00:17:22.919 "adrfam": "IPv4", 00:17:22.919 "traddr": "10.0.0.1", 00:17:22.919 "trsvcid": "55810" 00:17:22.919 }, 00:17:22.919 "auth": { 00:17:22.919 "state": "completed", 00:17:22.919 "digest": "sha384", 00:17:22.919 "dhgroup": "ffdhe2048" 00:17:22.919 } 00:17:22.919 } 00:17:22.919 ]' 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.919 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.920 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.920 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.920 12:51:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.178 12:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.744 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.052 00:17:24.052 12:51:54 
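Note how the key3 rounds, this one included, pass --dhchap-key key3 with no --dhchap-ctrlr-key: ckeys[3] is left unset on purpose, and the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in connect_authenticate turns the missing controller key into an empty argument list, quietly downgrading that round to unidirectional authentication. The bash idiom in isolation, a small sketch with hypothetical key names:

    keys=(key0 key1 key2 key3)       # names of keys registered earlier in the test
    ckeys=(ckey0 ckey1 ckey2)        # ckeys[3] deliberately left unset
    for keyid in "${!keys[@]}"; do
        # ${var:+word} expands to word only when var is set and non-empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # prints "--dhchap-key key0 --dhchap-ctrlr-key ckey0" (likewise 1 and 2),
    # then just "--dhchap-key key3" for the final, unidirectional round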
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.052 12:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.311 { 00:17:24.311 "cntlid": 63, 00:17:24.311 "qid": 0, 00:17:24.311 "state": "enabled", 00:17:24.311 "thread": "nvmf_tgt_poll_group_000", 00:17:24.311 "listen_address": { 00:17:24.311 "trtype": "TCP", 00:17:24.311 "adrfam": "IPv4", 00:17:24.311 "traddr": "10.0.0.2", 00:17:24.311 "trsvcid": "4420" 00:17:24.311 }, 00:17:24.311 "peer_address": { 00:17:24.311 "trtype": "TCP", 00:17:24.311 "adrfam": "IPv4", 00:17:24.311 "traddr": "10.0.0.1", 00:17:24.311 "trsvcid": "55830" 00:17:24.311 }, 00:17:24.311 "auth": { 00:17:24.311 "state": "completed", 00:17:24.311 "digest": "sha384", 00:17:24.311 "dhgroup": "ffdhe2048" 00:17:24.311 } 00:17:24.311 } 00:17:24.311 ]' 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.311 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.569 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.569 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.569 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.569 12:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:25.136 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
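That closes out sha384 with ffdhe2048; the loop now repeats the identical choreography for ffdhe3072. One iteration, reduced from the trace above to its bare commands: hostrpc is the test's wrapper around rpc.py -s /var/tmp/host.sock (the host application's RPC socket, expanded inline throughout this log), while rpc_cmd drives the target application, assumed in this sketch to sit on the default SPDK socket:

    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # pin the host to one digest/dhgroup combination for this round
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # allow the host NQN on the subsystem, together with its DH-HMAC-CHAP key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attaching the controller is what actually drives the AUTH exchange
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # success: the named controller exists, and it detaches cleanly
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    hostrpc bdev_nvme_detach_controller nvme0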
00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.137 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.395 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.396 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.655 00:17:25.655 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.655 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.655 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.914 { 
00:17:25.914 "cntlid": 65, 00:17:25.914 "qid": 0, 00:17:25.914 "state": "enabled", 00:17:25.914 "thread": "nvmf_tgt_poll_group_000", 00:17:25.914 "listen_address": { 00:17:25.914 "trtype": "TCP", 00:17:25.914 "adrfam": "IPv4", 00:17:25.914 "traddr": "10.0.0.2", 00:17:25.914 "trsvcid": "4420" 00:17:25.914 }, 00:17:25.914 "peer_address": { 00:17:25.914 "trtype": "TCP", 00:17:25.914 "adrfam": "IPv4", 00:17:25.914 "traddr": "10.0.0.1", 00:17:25.914 "trsvcid": "55850" 00:17:25.914 }, 00:17:25.914 "auth": { 00:17:25.914 "state": "completed", 00:17:25.914 "digest": "sha384", 00:17:25.914 "dhgroup": "ffdhe3072" 00:17:25.914 } 00:17:25.914 } 00:17:25.914 ]' 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.914 12:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.173 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.742 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.001 12:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.261 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.261 12:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.519 12:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.519 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.519 { 00:17:27.520 "cntlid": 67, 00:17:27.520 "qid": 0, 00:17:27.520 "state": "enabled", 00:17:27.520 "thread": "nvmf_tgt_poll_group_000", 00:17:27.520 "listen_address": { 00:17:27.520 "trtype": "TCP", 00:17:27.520 "adrfam": "IPv4", 00:17:27.520 "traddr": "10.0.0.2", 00:17:27.520 "trsvcid": "4420" 00:17:27.520 }, 00:17:27.520 "peer_address": { 00:17:27.520 "trtype": "TCP", 00:17:27.520 "adrfam": "IPv4", 00:17:27.520 "traddr": "10.0.0.1", 00:17:27.520 "trsvcid": "56690" 00:17:27.520 }, 00:17:27.520 "auth": { 00:17:27.520 "state": "completed", 00:17:27.520 "digest": "sha384", 00:17:27.520 "dhgroup": "ffdhe3072" 00:17:27.520 } 00:17:27.520 } 00:17:27.520 ]' 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.520 12:51:58 
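The qpairs='[ ... ]' JSON interleaved through the trace is the target-side evidence for each round: nvmf_subsystem_get_qpairs lists, per queue pair, the subsystem's listen_address, the initiator's peer_address, and an auth object with the negotiated digest, dhgroup, and state. The three [[ ... ]] assertions around this point boil down to the following, using the same RPCs as above with jq reading from a captured variable:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # AUTH handshake done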
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.520 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.778 12:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.347 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.606 00:17:28.606 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.606 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.606 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.865 { 00:17:28.865 "cntlid": 69, 00:17:28.865 "qid": 0, 00:17:28.865 "state": "enabled", 00:17:28.865 "thread": "nvmf_tgt_poll_group_000", 00:17:28.865 "listen_address": { 00:17:28.865 "trtype": "TCP", 00:17:28.865 "adrfam": "IPv4", 00:17:28.865 "traddr": "10.0.0.2", 00:17:28.865 "trsvcid": "4420" 00:17:28.865 }, 00:17:28.865 "peer_address": { 00:17:28.865 "trtype": "TCP", 00:17:28.865 "adrfam": "IPv4", 00:17:28.865 "traddr": "10.0.0.1", 00:17:28.865 "trsvcid": "56702" 00:17:28.865 }, 00:17:28.865 "auth": { 00:17:28.865 "state": "completed", 00:17:28.865 "digest": "sha384", 00:17:28.865 "dhgroup": "ffdhe3072" 00:17:28.865 } 00:17:28.865 } 00:17:28.865 ]' 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.865 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.866 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.125 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.125 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.125 12:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.125 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.694 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.953 12:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.212 00:17:30.212 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.212 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.212 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.472 { 00:17:30.472 "cntlid": 71, 00:17:30.472 "qid": 0, 00:17:30.472 "state": "enabled", 00:17:30.472 "thread": "nvmf_tgt_poll_group_000", 00:17:30.472 "listen_address": { 00:17:30.472 "trtype": "TCP", 00:17:30.472 "adrfam": "IPv4", 00:17:30.472 "traddr": "10.0.0.2", 00:17:30.472 "trsvcid": "4420" 00:17:30.472 }, 00:17:30.472 "peer_address": { 00:17:30.472 "trtype": "TCP", 00:17:30.472 "adrfam": "IPv4", 00:17:30.472 "traddr": "10.0.0.1", 00:17:30.472 "trsvcid": "56728" 00:17:30.472 }, 00:17:30.472 "auth": { 00:17:30.472 "state": "completed", 00:17:30.472 "digest": "sha384", 00:17:30.472 "dhgroup": "ffdhe3072" 00:17:30.472 } 00:17:30.472 } 00:17:30.472 ]' 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.472 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.731 12:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.298 12:52:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.557 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.815 00:17:31.816 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.816 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.816 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.074 { 00:17:32.074 "cntlid": 73, 00:17:32.074 "qid": 0, 00:17:32.074 "state": "enabled", 00:17:32.074 "thread": "nvmf_tgt_poll_group_000", 00:17:32.074 "listen_address": { 00:17:32.074 "trtype": "TCP", 00:17:32.074 "adrfam": "IPv4", 00:17:32.074 "traddr": "10.0.0.2", 00:17:32.074 "trsvcid": "4420" 00:17:32.074 }, 00:17:32.074 "peer_address": { 00:17:32.074 "trtype": "TCP", 00:17:32.074 "adrfam": "IPv4", 00:17:32.074 "traddr": "10.0.0.1", 00:17:32.074 "trsvcid": "56738" 00:17:32.074 }, 00:17:32.074 "auth": { 00:17:32.074 
"state": "completed", 00:17:32.074 "digest": "sha384", 00:17:32.074 "dhgroup": "ffdhe4096" 00:17:32.074 } 00:17:32.074 } 00:17:32.074 ]' 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.074 12:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.333 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:32.901 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.901 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.901 12:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.902 12:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.160 00:17:33.160 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.160 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.160 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.418 { 00:17:33.418 "cntlid": 75, 00:17:33.418 "qid": 0, 00:17:33.418 "state": "enabled", 00:17:33.418 "thread": "nvmf_tgt_poll_group_000", 00:17:33.418 "listen_address": { 00:17:33.418 "trtype": "TCP", 00:17:33.418 "adrfam": "IPv4", 00:17:33.418 "traddr": "10.0.0.2", 00:17:33.418 "trsvcid": "4420" 00:17:33.418 }, 00:17:33.418 "peer_address": { 00:17:33.418 "trtype": "TCP", 00:17:33.418 "adrfam": "IPv4", 00:17:33.418 "traddr": "10.0.0.1", 00:17:33.418 "trsvcid": "56758" 00:17:33.418 }, 00:17:33.418 "auth": { 00:17:33.418 "state": "completed", 00:17:33.418 "digest": "sha384", 00:17:33.418 "dhgroup": "ffdhe4096" 00:17:33.418 } 00:17:33.418 } 00:17:33.418 ]' 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.418 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.677 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.677 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.677 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.677 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.677 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.677 12:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:34.245 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.245 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.245 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.245 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.504 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:34.788 00:17:34.788 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.788 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.788 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.078 { 00:17:35.078 "cntlid": 77, 00:17:35.078 "qid": 0, 00:17:35.078 "state": "enabled", 00:17:35.078 "thread": "nvmf_tgt_poll_group_000", 00:17:35.078 "listen_address": { 00:17:35.078 "trtype": "TCP", 00:17:35.078 "adrfam": "IPv4", 00:17:35.078 "traddr": "10.0.0.2", 00:17:35.078 "trsvcid": "4420" 00:17:35.078 }, 00:17:35.078 "peer_address": { 00:17:35.078 "trtype": "TCP", 00:17:35.078 "adrfam": "IPv4", 00:17:35.078 "traddr": "10.0.0.1", 00:17:35.078 "trsvcid": "56798" 00:17:35.078 }, 00:17:35.078 "auth": { 00:17:35.078 "state": "completed", 00:17:35.078 "digest": "sha384", 00:17:35.078 "dhgroup": "ffdhe4096" 00:17:35.078 } 00:17:35.078 } 00:17:35.078 ]' 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.078 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.079 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.079 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.079 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.079 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.079 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.079 12:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.337 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.906 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.165 12:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.424 00:17:36.424 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.424 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.424 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.683 { 00:17:36.683 "cntlid": 79, 00:17:36.683 "qid": 
0, 00:17:36.683 "state": "enabled", 00:17:36.683 "thread": "nvmf_tgt_poll_group_000", 00:17:36.683 "listen_address": { 00:17:36.683 "trtype": "TCP", 00:17:36.683 "adrfam": "IPv4", 00:17:36.683 "traddr": "10.0.0.2", 00:17:36.683 "trsvcid": "4420" 00:17:36.683 }, 00:17:36.683 "peer_address": { 00:17:36.683 "trtype": "TCP", 00:17:36.683 "adrfam": "IPv4", 00:17:36.683 "traddr": "10.0.0.1", 00:17:36.683 "trsvcid": "52138" 00:17:36.683 }, 00:17:36.683 "auth": { 00:17:36.683 "state": "completed", 00:17:36.683 "digest": "sha384", 00:17:36.683 "dhgroup": "ffdhe4096" 00:17:36.683 } 00:17:36.683 } 00:17:36.683 ]' 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.683 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.942 12:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.511 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.771 12:52:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.771 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.030 00:17:38.030 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.030 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.030 12:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.289 { 00:17:38.289 "cntlid": 81, 00:17:38.289 "qid": 0, 00:17:38.289 "state": "enabled", 00:17:38.289 "thread": "nvmf_tgt_poll_group_000", 00:17:38.289 "listen_address": { 00:17:38.289 "trtype": "TCP", 00:17:38.289 "adrfam": "IPv4", 00:17:38.289 "traddr": "10.0.0.2", 00:17:38.289 "trsvcid": "4420" 00:17:38.289 }, 00:17:38.289 "peer_address": { 00:17:38.289 "trtype": "TCP", 00:17:38.289 "adrfam": "IPv4", 00:17:38.289 "traddr": "10.0.0.1", 00:17:38.289 "trsvcid": "52158" 00:17:38.289 }, 00:17:38.289 "auth": { 00:17:38.289 "state": "completed", 00:17:38.289 "digest": "sha384", 00:17:38.289 "dhgroup": "ffdhe6144" 00:17:38.289 } 00:17:38.289 } 00:17:38.289 ]' 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.289 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.548 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.142 12:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.402 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.661 00:17:39.661 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.661 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.661 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.920 { 00:17:39.920 "cntlid": 83, 00:17:39.920 "qid": 0, 00:17:39.920 "state": "enabled", 00:17:39.920 "thread": "nvmf_tgt_poll_group_000", 00:17:39.920 "listen_address": { 00:17:39.920 "trtype": "TCP", 00:17:39.920 "adrfam": "IPv4", 00:17:39.920 "traddr": "10.0.0.2", 00:17:39.920 "trsvcid": "4420" 00:17:39.920 }, 00:17:39.920 "peer_address": { 00:17:39.920 "trtype": "TCP", 00:17:39.920 "adrfam": "IPv4", 00:17:39.920 "traddr": "10.0.0.1", 00:17:39.920 "trsvcid": "52186" 00:17:39.920 }, 00:17:39.920 "auth": { 00:17:39.920 "state": "completed", 00:17:39.920 "digest": "sha384", 00:17:39.920 "dhgroup": "ffdhe6144" 00:17:39.920 } 00:17:39.920 } 00:17:39.920 ]' 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.920 12:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.179 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret 
DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.747 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.006 12:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.264 00:17:41.264 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.264 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.264 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.522 { 00:17:41.522 "cntlid": 85, 00:17:41.522 "qid": 0, 00:17:41.522 "state": "enabled", 00:17:41.522 "thread": "nvmf_tgt_poll_group_000", 00:17:41.522 "listen_address": { 00:17:41.522 "trtype": "TCP", 00:17:41.522 "adrfam": "IPv4", 00:17:41.522 "traddr": "10.0.0.2", 00:17:41.522 "trsvcid": "4420" 00:17:41.522 }, 00:17:41.522 "peer_address": { 00:17:41.522 "trtype": "TCP", 00:17:41.522 "adrfam": "IPv4", 00:17:41.522 "traddr": "10.0.0.1", 00:17:41.522 "trsvcid": "52218" 00:17:41.522 }, 00:17:41.522 "auth": { 00:17:41.522 "state": "completed", 00:17:41.522 "digest": "sha384", 00:17:41.522 "dhgroup": "ffdhe6144" 00:17:41.522 } 00:17:41.522 } 00:17:41.522 ]' 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.522 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.781 12:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
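Every iteration in the trace above replays the same host-side sequence. A minimal sketch of one pass, assuming SPDK's rpc.py and a running target; the socket path, NQNs, RPC names and key labels are copied verbatim from the log entries, while the shorthand variables RPC, SOCK, SUBNQN and HOSTNQN are introduced here only for readability (the DHHC-1 secrets themselves are the pre-registered test keys shown in the trace and are not repeated):

    #!/usr/bin/env bash
    set -e
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/host.sock   # host-side RPC socket used by "hostrpc" in the trace
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # Pin the host bdev layer to one digest/dhgroup combination.
    "$RPC" -s "$SOCK" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Allow the host on the subsystem with a specific key pair
    # (key2/ckey2, as in the iteration just completed above; this call
    # goes to the target app's default RPC socket, hence no -s).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach a controller from the host; DH-HMAC-CHAP runs during connect.
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2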
00:17:42.347 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.605 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.863 00:17:42.863 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.863 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.863 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.121 { 00:17:43.121 "cntlid": 87, 00:17:43.121 "qid": 0, 00:17:43.121 "state": "enabled", 00:17:43.121 "thread": "nvmf_tgt_poll_group_000", 00:17:43.121 "listen_address": { 00:17:43.121 "trtype": "TCP", 00:17:43.121 "adrfam": "IPv4", 00:17:43.121 "traddr": "10.0.0.2", 00:17:43.121 "trsvcid": "4420" 00:17:43.121 }, 00:17:43.121 "peer_address": { 00:17:43.121 "trtype": "TCP", 00:17:43.121 "adrfam": "IPv4", 00:17:43.121 "traddr": "10.0.0.1", 00:17:43.121 "trsvcid": "52244" 00:17:43.121 }, 00:17:43.121 "auth": { 00:17:43.121 "state": "completed", 
00:17:43.121 "digest": "sha384", 00:17:43.121 "dhgroup": "ffdhe6144" 00:17:43.121 } 00:17:43.121 } 00:17:43.121 ]' 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.121 12:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.121 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.121 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.121 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.379 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.943 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.201 12:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.766 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.766 { 00:17:44.766 "cntlid": 89, 00:17:44.766 "qid": 0, 00:17:44.766 "state": "enabled", 00:17:44.766 "thread": "nvmf_tgt_poll_group_000", 00:17:44.766 "listen_address": { 00:17:44.766 "trtype": "TCP", 00:17:44.766 "adrfam": "IPv4", 00:17:44.766 "traddr": "10.0.0.2", 00:17:44.766 "trsvcid": "4420" 00:17:44.766 }, 00:17:44.766 "peer_address": { 00:17:44.766 "trtype": "TCP", 00:17:44.766 "adrfam": "IPv4", 00:17:44.766 "traddr": "10.0.0.1", 00:17:44.766 "trsvcid": "52266" 00:17:44.766 }, 00:17:44.766 "auth": { 00:17:44.766 "state": "completed", 00:17:44.766 "digest": "sha384", 00:17:44.766 "dhgroup": "ffdhe8192" 00:17:44.766 } 00:17:44.766 } 00:17:44.766 ]' 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.766 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.024 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.024 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.024 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.024 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.024 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.024 12:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.588 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.845 12:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.846 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.846 12:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
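After each attach, the script checks what was actually negotiated before tearing the controller down again (target/auth.sh@44 through @49 in the trace). A sketch of that verification step, reusing the shorthand variables from the sketch above and assuming the target app listens on rpc.py's default socket:

    # Confirm the host-side controller came up.
    name=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == nvme0 ]]

    # Ask the target for the qpair list and check the negotiated auth fields
    # (digest, dhgroup and state, exactly as the jq filters in the log do).
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach before the next digest/dhgroup/key combination.
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller nvme0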
00:17:46.411 00:17:46.411 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.411 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.411 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.669 { 00:17:46.669 "cntlid": 91, 00:17:46.669 "qid": 0, 00:17:46.669 "state": "enabled", 00:17:46.669 "thread": "nvmf_tgt_poll_group_000", 00:17:46.669 "listen_address": { 00:17:46.669 "trtype": "TCP", 00:17:46.669 "adrfam": "IPv4", 00:17:46.669 "traddr": "10.0.0.2", 00:17:46.669 "trsvcid": "4420" 00:17:46.669 }, 00:17:46.669 "peer_address": { 00:17:46.669 "trtype": "TCP", 00:17:46.669 "adrfam": "IPv4", 00:17:46.669 "traddr": "10.0.0.1", 00:17:46.669 "trsvcid": "34976" 00:17:46.669 }, 00:17:46.669 "auth": { 00:17:46.669 "state": "completed", 00:17:46.669 "digest": "sha384", 00:17:46.669 "dhgroup": "ffdhe8192" 00:17:46.669 } 00:17:46.669 } 00:17:46.669 ]' 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.669 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.928 12:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.496 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.755 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.014 00:17:48.273 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.273 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.273 12:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.273 { 
00:17:48.273 "cntlid": 93, 00:17:48.273 "qid": 0, 00:17:48.273 "state": "enabled", 00:17:48.273 "thread": "nvmf_tgt_poll_group_000", 00:17:48.273 "listen_address": { 00:17:48.273 "trtype": "TCP", 00:17:48.273 "adrfam": "IPv4", 00:17:48.273 "traddr": "10.0.0.2", 00:17:48.273 "trsvcid": "4420" 00:17:48.273 }, 00:17:48.273 "peer_address": { 00:17:48.273 "trtype": "TCP", 00:17:48.273 "adrfam": "IPv4", 00:17:48.273 "traddr": "10.0.0.1", 00:17:48.273 "trsvcid": "35008" 00:17:48.273 }, 00:17:48.273 "auth": { 00:17:48.273 "state": "completed", 00:17:48.273 "digest": "sha384", 00:17:48.273 "dhgroup": "ffdhe8192" 00:17:48.273 } 00:17:48.273 } 00:17:48.273 ]' 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.273 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.532 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.532 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.532 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.532 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.532 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.532 12:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:49.107 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.107 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.107 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.107 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.365 12:52:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.365 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.931 00:17:49.931 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.931 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.931 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.190 { 00:17:50.190 "cntlid": 95, 00:17:50.190 "qid": 0, 00:17:50.190 "state": "enabled", 00:17:50.190 "thread": "nvmf_tgt_poll_group_000", 00:17:50.190 "listen_address": { 00:17:50.190 "trtype": "TCP", 00:17:50.190 "adrfam": "IPv4", 00:17:50.190 "traddr": "10.0.0.2", 00:17:50.190 "trsvcid": "4420" 00:17:50.190 }, 00:17:50.190 "peer_address": { 00:17:50.190 "trtype": "TCP", 00:17:50.190 "adrfam": "IPv4", 00:17:50.190 "traddr": "10.0.0.1", 00:17:50.190 "trsvcid": "35028" 00:17:50.190 }, 00:17:50.190 "auth": { 00:17:50.190 "state": "completed", 00:17:50.190 "digest": "sha384", 00:17:50.190 "dhgroup": "ffdhe8192" 00:17:50.190 } 00:17:50.190 } 00:17:50.190 ]' 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.190 12:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.190 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.190 12:52:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.190 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.190 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.190 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.449 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.017 12:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.313 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.570 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.570 { 00:17:51.570 "cntlid": 97, 00:17:51.570 "qid": 0, 00:17:51.570 "state": "enabled", 00:17:51.570 "thread": "nvmf_tgt_poll_group_000", 00:17:51.570 "listen_address": { 00:17:51.570 "trtype": "TCP", 00:17:51.570 "adrfam": "IPv4", 00:17:51.570 "traddr": "10.0.0.2", 00:17:51.570 "trsvcid": "4420" 00:17:51.570 }, 00:17:51.570 "peer_address": { 00:17:51.570 "trtype": "TCP", 00:17:51.570 "adrfam": "IPv4", 00:17:51.570 "traddr": "10.0.0.1", 00:17:51.570 "trsvcid": "35050" 00:17:51.570 }, 00:17:51.570 "auth": { 00:17:51.570 "state": "completed", 00:17:51.570 "digest": "sha512", 00:17:51.570 "dhgroup": "null" 00:17:51.570 } 00:17:51.570 } 00:17:51.570 ]' 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.570 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.829 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.829 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.829 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.829 12:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.396 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.655 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.912 00:17:52.912 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.912 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.912 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.170 { 00:17:53.170 "cntlid": 99, 00:17:53.170 "qid": 0, 00:17:53.170 "state": "enabled", 00:17:53.170 "thread": "nvmf_tgt_poll_group_000", 00:17:53.170 "listen_address": { 00:17:53.170 "trtype": "TCP", 00:17:53.170 "adrfam": "IPv4", 00:17:53.170 "traddr": "10.0.0.2", 00:17:53.170 "trsvcid": "4420" 00:17:53.170 }, 00:17:53.170 "peer_address": { 00:17:53.170 "trtype": "TCP", 00:17:53.170 "adrfam": "IPv4", 00:17:53.170 "traddr": "10.0.0.1", 00:17:53.170 "trsvcid": "35080" 00:17:53.170 }, 00:17:53.170 "auth": { 00:17:53.170 "state": "completed", 00:17:53.170 "digest": "sha512", 00:17:53.170 "dhgroup": "null" 00:17:53.170 } 00:17:53.170 } 00:17:53.170 ]' 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.170 12:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.170 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:53.170 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.170 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.170 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.170 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.426 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.994 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:53.994 12:52:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.252 12:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.252 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.252 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.252 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.511 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.511 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.511 { 00:17:54.511 "cntlid": 101, 00:17:54.511 "qid": 0, 00:17:54.511 "state": "enabled", 00:17:54.511 "thread": "nvmf_tgt_poll_group_000", 00:17:54.512 "listen_address": { 00:17:54.512 "trtype": "TCP", 00:17:54.512 "adrfam": "IPv4", 00:17:54.512 "traddr": "10.0.0.2", 00:17:54.512 "trsvcid": "4420" 00:17:54.512 }, 00:17:54.512 "peer_address": { 00:17:54.512 "trtype": "TCP", 00:17:54.512 "adrfam": "IPv4", 00:17:54.512 "traddr": "10.0.0.1", 00:17:54.512 "trsvcid": "35094" 00:17:54.512 }, 00:17:54.512 "auth": 
{ 00:17:54.512 "state": "completed", 00:17:54.512 "digest": "sha512", 00:17:54.512 "dhgroup": "null" 00:17:54.512 } 00:17:54.512 } 00:17:54.512 ]' 00:17:54.512 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.770 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.028 12:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.611 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.867 00:17:55.867 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.867 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.867 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.124 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.124 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.124 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.125 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.125 12:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.125 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.125 { 00:17:56.125 "cntlid": 103, 00:17:56.125 "qid": 0, 00:17:56.125 "state": "enabled", 00:17:56.125 "thread": "nvmf_tgt_poll_group_000", 00:17:56.125 "listen_address": { 00:17:56.125 "trtype": "TCP", 00:17:56.125 "adrfam": "IPv4", 00:17:56.125 "traddr": "10.0.0.2", 00:17:56.125 "trsvcid": "4420" 00:17:56.125 }, 00:17:56.125 "peer_address": { 00:17:56.125 "trtype": "TCP", 00:17:56.125 "adrfam": "IPv4", 00:17:56.125 "traddr": "10.0.0.1", 00:17:56.125 "trsvcid": "35124" 00:17:56.125 }, 00:17:56.125 "auth": { 00:17:56.125 "state": "completed", 00:17:56.125 "digest": "sha512", 00:17:56.125 "dhgroup": "null" 00:17:56.125 } 00:17:56.125 } 00:17:56.125 ]' 00:17:56.125 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.125 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.125 12:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.125 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.125 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.125 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.125 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.125 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.382 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.948 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.205 12:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.463 00:17:57.463 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.463 12:52:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.463 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.721 { 00:17:57.721 "cntlid": 105, 00:17:57.721 "qid": 0, 00:17:57.721 "state": "enabled", 00:17:57.721 "thread": "nvmf_tgt_poll_group_000", 00:17:57.721 "listen_address": { 00:17:57.721 "trtype": "TCP", 00:17:57.721 "adrfam": "IPv4", 00:17:57.721 "traddr": "10.0.0.2", 00:17:57.721 "trsvcid": "4420" 00:17:57.721 }, 00:17:57.721 "peer_address": { 00:17:57.721 "trtype": "TCP", 00:17:57.721 "adrfam": "IPv4", 00:17:57.721 "traddr": "10.0.0.1", 00:17:57.721 "trsvcid": "59012" 00:17:57.721 }, 00:17:57.721 "auth": { 00:17:57.721 "state": "completed", 00:17:57.721 "digest": "sha512", 00:17:57.721 "dhgroup": "ffdhe2048" 00:17:57.721 } 00:17:57.721 } 00:17:57.721 ]' 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.721 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.978 12:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
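Every pass above is one DH-CHAP round: constrain the host's allowed digest and DH group, register the host's key(s) on the subsystem, attach a controller through the host RPC socket, read the subsystem's qpairs back from the target, and assert that authentication completed with exactly the negotiated parameters, then repeat the same handshake with the kernel initiator via nvme-cli before removing the host. Below is a condensed sketch of the round that just completed (sha512 + ffdhe2048 + key0), built only from commands that appear verbatim in this log; the $KEY0/$CKEY0 variables stand in for the DHHC-1 secrets, and the direct rpc.py calls and qpairs capture are illustrative stand-ins for auth.sh's rpc_cmd/hostrpc plumbing.

  # Paths and NQNs as printed by auth.sh above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: offer only this digest/dhgroup combination for DH-CHAP.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host, binding it to key0 (and ctrlr key ckey0).
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach from the SPDK host stack; this drives the DH-CHAP exchange.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target-side check: the qpair must report the negotiated auth parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

  # Tear down, then prove the kernel initiator can do the same handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
  nvme disconnect -n $subnqn
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The enclosing loops walk this round over every digest, DH group, and key index (sha384/ffdhe8192 and sha512/null above, ffdhe3072 below), so a single failed jq assertion pinpoints which digest/dhgroup/key combination broke.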
00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.544 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.802 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.803 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.803 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.803 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.803 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.060 00:17:59.060 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.060 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.060 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.060 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.061 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.061 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.061 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.061 12:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.061 12:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.061 { 00:17:59.061 "cntlid": 107, 00:17:59.061 "qid": 0, 00:17:59.061 "state": "enabled", 00:17:59.061 "thread": 
"nvmf_tgt_poll_group_000", 00:17:59.061 "listen_address": { 00:17:59.061 "trtype": "TCP", 00:17:59.061 "adrfam": "IPv4", 00:17:59.061 "traddr": "10.0.0.2", 00:17:59.061 "trsvcid": "4420" 00:17:59.061 }, 00:17:59.061 "peer_address": { 00:17:59.061 "trtype": "TCP", 00:17:59.061 "adrfam": "IPv4", 00:17:59.061 "traddr": "10.0.0.1", 00:17:59.061 "trsvcid": "59052" 00:17:59.061 }, 00:17:59.061 "auth": { 00:17:59.061 "state": "completed", 00:17:59.061 "digest": "sha512", 00:17:59.061 "dhgroup": "ffdhe2048" 00:17:59.061 } 00:17:59.061 } 00:17:59.061 ]' 00:17:59.061 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.318 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.575 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.140 12:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.140 12:52:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.140 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.397 00:18:00.397 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.397 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.397 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.655 { 00:18:00.655 "cntlid": 109, 00:18:00.655 "qid": 0, 00:18:00.655 "state": "enabled", 00:18:00.655 "thread": "nvmf_tgt_poll_group_000", 00:18:00.655 "listen_address": { 00:18:00.655 "trtype": "TCP", 00:18:00.655 "adrfam": "IPv4", 00:18:00.655 "traddr": "10.0.0.2", 00:18:00.655 "trsvcid": "4420" 00:18:00.655 }, 00:18:00.655 "peer_address": { 00:18:00.655 "trtype": "TCP", 00:18:00.655 "adrfam": "IPv4", 00:18:00.655 "traddr": "10.0.0.1", 00:18:00.655 "trsvcid": "59084" 00:18:00.655 }, 00:18:00.655 "auth": { 00:18:00.655 "state": "completed", 00:18:00.655 "digest": "sha512", 00:18:00.655 "dhgroup": "ffdhe2048" 00:18:00.655 } 00:18:00.655 } 00:18:00.655 ]' 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.655 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.914 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.914 12:52:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.914 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.914 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.914 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.914 12:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.490 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.750 12:52:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.009 00:18:02.009 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.009 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.009 12:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.268 { 00:18:02.268 "cntlid": 111, 00:18:02.268 "qid": 0, 00:18:02.268 "state": "enabled", 00:18:02.268 "thread": "nvmf_tgt_poll_group_000", 00:18:02.268 "listen_address": { 00:18:02.268 "trtype": "TCP", 00:18:02.268 "adrfam": "IPv4", 00:18:02.268 "traddr": "10.0.0.2", 00:18:02.268 "trsvcid": "4420" 00:18:02.268 }, 00:18:02.268 "peer_address": { 00:18:02.268 "trtype": "TCP", 00:18:02.268 "adrfam": "IPv4", 00:18:02.268 "traddr": "10.0.0.1", 00:18:02.268 "trsvcid": "59106" 00:18:02.268 }, 00:18:02.268 "auth": { 00:18:02.268 "state": "completed", 00:18:02.268 "digest": "sha512", 00:18:02.268 "dhgroup": "ffdhe2048" 00:18:02.268 } 00:18:02.268 } 00:18:02.268 ]' 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.268 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.526 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.092 12:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.351 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.610 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.610 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.868 12:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.868 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.868 { 00:18:03.868 "cntlid": 113, 00:18:03.868 "qid": 0, 00:18:03.868 "state": "enabled", 00:18:03.868 "thread": "nvmf_tgt_poll_group_000", 00:18:03.868 "listen_address": { 00:18:03.868 "trtype": "TCP", 00:18:03.868 "adrfam": "IPv4", 00:18:03.868 "traddr": "10.0.0.2", 00:18:03.868 "trsvcid": "4420" 00:18:03.868 }, 00:18:03.868 "peer_address": { 00:18:03.868 "trtype": "TCP", 00:18:03.868 "adrfam": "IPv4", 00:18:03.868 "traddr": "10.0.0.1", 00:18:03.868 "trsvcid": "59120" 00:18:03.868 }, 00:18:03.868 "auth": { 00:18:03.868 "state": "completed", 00:18:03.868 "digest": "sha512", 00:18:03.868 "dhgroup": "ffdhe3072" 00:18:03.868 } 00:18:03.868 } 00:18:03.868 ]' 00:18:03.868 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.868 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.868 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.868 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.869 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.869 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.869 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.869 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.128 12:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.697 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.956 00:18:04.956 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.956 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.956 12:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.216 { 00:18:05.216 "cntlid": 115, 00:18:05.216 "qid": 0, 00:18:05.216 "state": "enabled", 00:18:05.216 "thread": "nvmf_tgt_poll_group_000", 00:18:05.216 "listen_address": { 00:18:05.216 "trtype": "TCP", 00:18:05.216 "adrfam": "IPv4", 00:18:05.216 "traddr": "10.0.0.2", 00:18:05.216 "trsvcid": "4420" 00:18:05.216 }, 00:18:05.216 "peer_address": { 00:18:05.216 "trtype": "TCP", 00:18:05.216 "adrfam": "IPv4", 00:18:05.216 "traddr": "10.0.0.1", 00:18:05.216 "trsvcid": "59144" 00:18:05.216 }, 00:18:05.216 "auth": { 00:18:05.216 "state": "completed", 00:18:05.216 "digest": "sha512", 00:18:05.216 "dhgroup": "ffdhe3072" 00:18:05.216 } 00:18:05.216 } 
00:18:05.216 ]' 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.216 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.475 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.475 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.475 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.475 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.043 12:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.302 12:52:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.302 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.561 00:18:06.561 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.561 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.561 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.820 { 00:18:06.820 "cntlid": 117, 00:18:06.820 "qid": 0, 00:18:06.820 "state": "enabled", 00:18:06.820 "thread": "nvmf_tgt_poll_group_000", 00:18:06.820 "listen_address": { 00:18:06.820 "trtype": "TCP", 00:18:06.820 "adrfam": "IPv4", 00:18:06.820 "traddr": "10.0.0.2", 00:18:06.820 "trsvcid": "4420" 00:18:06.820 }, 00:18:06.820 "peer_address": { 00:18:06.820 "trtype": "TCP", 00:18:06.820 "adrfam": "IPv4", 00:18:06.820 "traddr": "10.0.0.1", 00:18:06.820 "trsvcid": "35540" 00:18:06.820 }, 00:18:06.820 "auth": { 00:18:06.820 "state": "completed", 00:18:06.820 "digest": "sha512", 00:18:06.820 "dhgroup": "ffdhe3072" 00:18:06.820 } 00:18:06.820 } 00:18:06.820 ]' 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.820 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.079 12:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.730 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.989 12:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.989 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.989 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.989 00:18:07.989 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.989 12:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.989 12:52:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.248 { 00:18:08.248 "cntlid": 119, 00:18:08.248 "qid": 0, 00:18:08.248 "state": "enabled", 00:18:08.248 "thread": "nvmf_tgt_poll_group_000", 00:18:08.248 "listen_address": { 00:18:08.248 "trtype": "TCP", 00:18:08.248 "adrfam": "IPv4", 00:18:08.248 "traddr": "10.0.0.2", 00:18:08.248 "trsvcid": "4420" 00:18:08.248 }, 00:18:08.248 "peer_address": { 00:18:08.248 "trtype": "TCP", 00:18:08.248 "adrfam": "IPv4", 00:18:08.248 "traddr": "10.0.0.1", 00:18:08.248 "trsvcid": "35558" 00:18:08.248 }, 00:18:08.248 "auth": { 00:18:08.248 "state": "completed", 00:18:08.248 "digest": "sha512", 00:18:08.248 "dhgroup": "ffdhe3072" 00:18:08.248 } 00:18:08.248 } 00:18:08.248 ]' 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.248 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.507 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.507 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.507 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.507 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.507 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:18:09.075 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.075 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.075 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.075 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.075 12:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.075 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.075 12:52:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.076 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.076 12:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.334 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.592 00:18:09.592 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.592 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.592 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.851 { 00:18:09.851 "cntlid": 121, 00:18:09.851 "qid": 0, 00:18:09.851 "state": "enabled", 00:18:09.851 "thread": "nvmf_tgt_poll_group_000", 00:18:09.851 "listen_address": { 00:18:09.851 "trtype": "TCP", 00:18:09.851 "adrfam": "IPv4", 
00:18:09.851 "traddr": "10.0.0.2", 00:18:09.851 "trsvcid": "4420" 00:18:09.851 }, 00:18:09.851 "peer_address": { 00:18:09.851 "trtype": "TCP", 00:18:09.851 "adrfam": "IPv4", 00:18:09.851 "traddr": "10.0.0.1", 00:18:09.851 "trsvcid": "35586" 00:18:09.851 }, 00:18:09.851 "auth": { 00:18:09.851 "state": "completed", 00:18:09.851 "digest": "sha512", 00:18:09.851 "dhgroup": "ffdhe4096" 00:18:09.851 } 00:18:09.851 } 00:18:09.851 ]' 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.851 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.110 12:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.678 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.937 12:52:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.937 12:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.196 00:18:11.196 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.196 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.196 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.456 { 00:18:11.456 "cntlid": 123, 00:18:11.456 "qid": 0, 00:18:11.456 "state": "enabled", 00:18:11.456 "thread": "nvmf_tgt_poll_group_000", 00:18:11.456 "listen_address": { 00:18:11.456 "trtype": "TCP", 00:18:11.456 "adrfam": "IPv4", 00:18:11.456 "traddr": "10.0.0.2", 00:18:11.456 "trsvcid": "4420" 00:18:11.456 }, 00:18:11.456 "peer_address": { 00:18:11.456 "trtype": "TCP", 00:18:11.456 "adrfam": "IPv4", 00:18:11.456 "traddr": "10.0.0.1", 00:18:11.456 "trsvcid": "35616" 00:18:11.456 }, 00:18:11.456 "auth": { 00:18:11.456 "state": "completed", 00:18:11.456 "digest": "sha512", 00:18:11.456 "dhgroup": "ffdhe4096" 00:18:11.456 } 00:18:11.456 } 00:18:11.456 ]' 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.456 12:52:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.456 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.715 12:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.283 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.542 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.801 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.801 { 00:18:12.801 "cntlid": 125, 00:18:12.801 "qid": 0, 00:18:12.801 "state": "enabled", 00:18:12.801 "thread": "nvmf_tgt_poll_group_000", 00:18:12.801 "listen_address": { 00:18:12.801 "trtype": "TCP", 00:18:12.801 "adrfam": "IPv4", 00:18:12.801 "traddr": "10.0.0.2", 00:18:12.801 "trsvcid": "4420" 00:18:12.801 }, 00:18:12.801 "peer_address": { 00:18:12.801 "trtype": "TCP", 00:18:12.801 "adrfam": "IPv4", 00:18:12.801 "traddr": "10.0.0.1", 00:18:12.801 "trsvcid": "35632" 00:18:12.801 }, 00:18:12.801 "auth": { 00:18:12.801 "state": "completed", 00:18:12.801 "digest": "sha512", 00:18:12.801 "dhgroup": "ffdhe4096" 00:18:12.801 } 00:18:12.801 } 00:18:12.801 ]' 00:18:12.801 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.059 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.060 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.060 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.060 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.060 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.060 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.060 12:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.318 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
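Every iteration in this trace runs the same connect_authenticate cycle from target/auth.sh, once per DH group and key index: configure the host-side DH-HMAC-CHAP options, register the host NQN on the subsystem with a key pair, attach a controller over the host RPC socket, assert the qpair's negotiated digest/dhgroup/state, detach, then repeat the handshake with the kernel initiator via nvme-cli before removing the host. A minimal sketch of one iteration follows; it reuses only commands visible in the trace, while $HOSTID, $digest, $dhgroup, $keyid and the rpc wrapper variables are illustrative stand-ins, the target RPC socket is assumed to be the default, and the DHHC-1 secrets are elided.

  # Two RPC endpoints, as in the trace: the target app (default socket) and the
  # host app at /var/tmp/host.sock ("hostrpc"). Script paths are placeholders.
  rpc_tgt="scripts/rpc.py"
  rpc_host="scripts/rpc.py -s /var/tmp/host.sock"
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID"

  # Restrict the host to one digest/dhgroup combination for this pass.
  $rpc_host bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Register the host with its key pair on the target. (For key indexes that
  # have no ctrl key, e.g. key3 above, the trace drops --dhchap-ctrlr-key.)
  $rpc_tgt nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Attach a controller from the host app; this is where the handshake runs.
  $rpc_host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # On success the target reports an authenticated qpair.
  $rpc_tgt nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  $rpc_host bdev_nvme_detach_controller nvme0
  # Kernel-initiator leg of the same cycle (secrets elided).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$HOSTID" \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $rpc_tgt nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

A completed handshake shows up as "state": "completed" in the qpair's auth block, alongside the digest and dhgroup that the jq assertions in the trace compare against.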
00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.886 12:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.146 00:18:14.146 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.146 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.146 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.405 { 00:18:14.405 "cntlid": 127, 00:18:14.405 "qid": 0, 00:18:14.405 "state": "enabled", 00:18:14.405 "thread": "nvmf_tgt_poll_group_000", 00:18:14.405 "listen_address": { 00:18:14.405 "trtype": "TCP", 00:18:14.405 "adrfam": "IPv4", 00:18:14.405 "traddr": "10.0.0.2", 00:18:14.405 "trsvcid": "4420" 00:18:14.405 }, 00:18:14.405 "peer_address": { 00:18:14.405 "trtype": "TCP", 00:18:14.405 "adrfam": "IPv4", 00:18:14.405 "traddr": "10.0.0.1", 00:18:14.405 "trsvcid": "35652" 00:18:14.405 }, 00:18:14.405 "auth": { 00:18:14.405 "state": "completed", 00:18:14.405 "digest": "sha512", 00:18:14.405 "dhgroup": "ffdhe4096" 00:18:14.405 } 00:18:14.405 } 00:18:14.405 ]' 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.405 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.664 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.664 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.664 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.664 12:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:18:15.233 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.233 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.233 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.492 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.492 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.492 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.492 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.492 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.493 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.062 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.062 { 00:18:16.062 "cntlid": 129, 00:18:16.062 "qid": 0, 00:18:16.062 "state": "enabled", 00:18:16.062 "thread": "nvmf_tgt_poll_group_000", 00:18:16.062 "listen_address": { 00:18:16.062 "trtype": "TCP", 00:18:16.062 "adrfam": "IPv4", 00:18:16.062 "traddr": "10.0.0.2", 00:18:16.062 "trsvcid": "4420" 00:18:16.062 }, 00:18:16.062 "peer_address": { 00:18:16.062 "trtype": "TCP", 00:18:16.062 "adrfam": "IPv4", 00:18:16.062 "traddr": "10.0.0.1", 00:18:16.062 "trsvcid": "35664" 00:18:16.062 }, 00:18:16.062 "auth": { 00:18:16.062 "state": "completed", 00:18:16.062 "digest": "sha512", 00:18:16.062 "dhgroup": "ffdhe6144" 00:18:16.062 } 00:18:16.062 } 00:18:16.062 ]' 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.062 12:52:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.062 12:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.062 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.062 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.321 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.321 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.321 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.321 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.888 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.147 12:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.147 12:52:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.147 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.147 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.405 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.665 { 00:18:17.665 "cntlid": 131, 00:18:17.665 "qid": 0, 00:18:17.665 "state": "enabled", 00:18:17.665 "thread": "nvmf_tgt_poll_group_000", 00:18:17.665 "listen_address": { 00:18:17.665 "trtype": "TCP", 00:18:17.665 "adrfam": "IPv4", 00:18:17.665 "traddr": "10.0.0.2", 00:18:17.665 "trsvcid": "4420" 00:18:17.665 }, 00:18:17.665 "peer_address": { 00:18:17.665 "trtype": "TCP", 00:18:17.665 "adrfam": "IPv4", 00:18:17.665 "traddr": "10.0.0.1", 00:18:17.665 "trsvcid": "58016" 00:18:17.665 }, 00:18:17.665 "auth": { 00:18:17.665 "state": "completed", 00:18:17.665 "digest": "sha512", 00:18:17.665 "dhgroup": "ffdhe6144" 00:18:17.665 } 00:18:17.665 } 00:18:17.665 ]' 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.665 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.925 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.925 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.925 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.925 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.925 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.925 12:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:18:18.493 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.493 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.493 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.494 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.494 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.494 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.494 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.494 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.753 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.011 00:18:19.012 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.012 12:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.012 12:52:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.269 { 00:18:19.269 "cntlid": 133, 00:18:19.269 "qid": 0, 00:18:19.269 "state": "enabled", 00:18:19.269 "thread": "nvmf_tgt_poll_group_000", 00:18:19.269 "listen_address": { 00:18:19.269 "trtype": "TCP", 00:18:19.269 "adrfam": "IPv4", 00:18:19.269 "traddr": "10.0.0.2", 00:18:19.269 "trsvcid": "4420" 00:18:19.269 }, 00:18:19.269 "peer_address": { 00:18:19.269 "trtype": "TCP", 00:18:19.269 "adrfam": "IPv4", 00:18:19.269 "traddr": "10.0.0.1", 00:18:19.269 "trsvcid": "58052" 00:18:19.269 }, 00:18:19.269 "auth": { 00:18:19.269 "state": "completed", 00:18:19.269 "digest": "sha512", 00:18:19.269 "dhgroup": "ffdhe6144" 00:18:19.269 } 00:18:19.269 } 00:18:19.269 ]' 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.269 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.526 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.526 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.526 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.526 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.526 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.527 12:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.092 12:52:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.092 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.350 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.351 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.351 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.609 00:18:20.609 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.609 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.609 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.867 { 00:18:20.867 "cntlid": 135, 00:18:20.867 "qid": 0, 00:18:20.867 "state": "enabled", 00:18:20.867 "thread": "nvmf_tgt_poll_group_000", 00:18:20.867 "listen_address": { 00:18:20.867 "trtype": "TCP", 00:18:20.867 "adrfam": "IPv4", 00:18:20.867 "traddr": "10.0.0.2", 00:18:20.867 "trsvcid": "4420" 00:18:20.867 }, 
00:18:20.867 "peer_address": { 00:18:20.867 "trtype": "TCP", 00:18:20.867 "adrfam": "IPv4", 00:18:20.867 "traddr": "10.0.0.1", 00:18:20.867 "trsvcid": "58076" 00:18:20.867 }, 00:18:20.867 "auth": { 00:18:20.867 "state": "completed", 00:18:20.867 "digest": "sha512", 00:18:20.867 "dhgroup": "ffdhe6144" 00:18:20.867 } 00:18:20.867 } 00:18:20.867 ]' 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.867 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.125 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.125 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.125 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.125 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.125 12:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.125 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.692 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.951 12:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.518 00:18:22.518 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.518 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.518 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.777 { 00:18:22.777 "cntlid": 137, 00:18:22.777 "qid": 0, 00:18:22.777 "state": "enabled", 00:18:22.777 "thread": "nvmf_tgt_poll_group_000", 00:18:22.777 "listen_address": { 00:18:22.777 "trtype": "TCP", 00:18:22.777 "adrfam": "IPv4", 00:18:22.777 "traddr": "10.0.0.2", 00:18:22.777 "trsvcid": "4420" 00:18:22.777 }, 00:18:22.777 "peer_address": { 00:18:22.777 "trtype": "TCP", 00:18:22.777 "adrfam": "IPv4", 00:18:22.777 "traddr": "10.0.0.1", 00:18:22.777 "trsvcid": "58098" 00:18:22.777 }, 00:18:22.777 "auth": { 00:18:22.777 "state": "completed", 00:18:22.777 "digest": "sha512", 00:18:22.777 "dhgroup": "ffdhe8192" 00:18:22.777 } 00:18:22.777 } 00:18:22.777 ]' 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.777 12:52:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.777 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.035 12:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.603 12:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.172 00:18:24.172 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.172 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.172 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.475 { 00:18:24.475 "cntlid": 139, 00:18:24.475 "qid": 0, 00:18:24.475 "state": "enabled", 00:18:24.475 "thread": "nvmf_tgt_poll_group_000", 00:18:24.475 "listen_address": { 00:18:24.475 "trtype": "TCP", 00:18:24.475 "adrfam": "IPv4", 00:18:24.475 "traddr": "10.0.0.2", 00:18:24.475 "trsvcid": "4420" 00:18:24.475 }, 00:18:24.475 "peer_address": { 00:18:24.475 "trtype": "TCP", 00:18:24.475 "adrfam": "IPv4", 00:18:24.475 "traddr": "10.0.0.1", 00:18:24.475 "trsvcid": "58122" 00:18:24.475 }, 00:18:24.475 "auth": { 00:18:24.475 "state": "completed", 00:18:24.475 "digest": "sha512", 00:18:24.475 "dhgroup": "ffdhe8192" 00:18:24.475 } 00:18:24.475 } 00:18:24.475 ]' 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.475 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.476 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.476 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.476 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.476 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.476 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.476 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.735 12:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTZjMWUwN2MxODJhYjhlMGJhOTZkYjI5Nzk4NzZkZmUdEub0: --dhchap-ctrl-secret DHHC-1:02:NzY5ZjBmYTEzY2VkM2MwMGM5MzcxZGM2ZjBiY2FlZGI2YjA5Y2ZkM2Q0MDU5MWMw6DuDZg==: 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.303 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.562 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.820 00:18:25.821 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.821 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.821 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.079 { 00:18:26.079 "cntlid": 141, 00:18:26.079 "qid": 0, 00:18:26.079 "state": "enabled", 00:18:26.079 "thread": "nvmf_tgt_poll_group_000", 00:18:26.079 "listen_address": { 00:18:26.079 "trtype": "TCP", 00:18:26.079 "adrfam": "IPv4", 00:18:26.079 "traddr": "10.0.0.2", 00:18:26.079 "trsvcid": "4420" 00:18:26.079 }, 00:18:26.079 "peer_address": { 00:18:26.079 "trtype": "TCP", 00:18:26.079 "adrfam": "IPv4", 00:18:26.079 "traddr": "10.0.0.1", 00:18:26.079 "trsvcid": "58132" 00:18:26.079 }, 00:18:26.079 "auth": { 00:18:26.079 "state": "completed", 00:18:26.079 "digest": "sha512", 00:18:26.079 "dhgroup": "ffdhe8192" 00:18:26.079 } 00:18:26.079 } 00:18:26.079 ]' 00:18:26.079 12:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.079 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.079 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.337 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.337 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.337 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.337 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.337 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.337 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmIyMjcxMzA5ZmM4M2UyMDkwY2JjNDRlZWE2NWY3MzY4M2FmYzE1OTAyM2ExY2E4wkRXEQ==: --dhchap-ctrl-secret DHHC-1:01:ZmVjMjk1YTE4MzhmNmI1MjcyNGE2MGQzYzNmY2M3MzLA+yzy: 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.904 12:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.163 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.730 00:18:27.730 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.730 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.730 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.989 { 00:18:27.989 "cntlid": 143, 00:18:27.989 "qid": 0, 00:18:27.989 "state": "enabled", 00:18:27.989 "thread": "nvmf_tgt_poll_group_000", 00:18:27.989 "listen_address": { 00:18:27.989 "trtype": "TCP", 00:18:27.989 "adrfam": "IPv4", 00:18:27.989 "traddr": "10.0.0.2", 00:18:27.989 "trsvcid": "4420" 00:18:27.989 }, 00:18:27.989 "peer_address": { 00:18:27.989 "trtype": "TCP", 00:18:27.989 "adrfam": "IPv4", 00:18:27.989 "traddr": "10.0.0.1", 00:18:27.989 "trsvcid": "36262" 00:18:27.989 }, 00:18:27.989 "auth": { 00:18:27.989 "state": "completed", 00:18:27.989 "digest": "sha512", 00:18:27.989 "dhgroup": "ffdhe8192" 00:18:27.989 } 00:18:27.989 } 00:18:27.989 ]' 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.989 
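[annotation] The passing jq comparisons around this point are the tail of one connect_authenticate round. Each round pins the host-side initiator to a single DH-CHAP digest and DH group via bdev_nvme_set_options, authorizes the host NQN on the subsystem with the key under test, attaches a controller, and checks the negotiated parameters reported by nvmf_subsystem_get_qpairs. A minimal standalone sketch of that round follows; it is not the verbatim auth.sh. Paths, NQNs, and addresses are copied from this log, the variable names are illustrative, and the target-side rpc.py calls assume the default /var/tmp/spdk.sock (the harness wraps them through its own rpc_cmd helper).

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
digest=sha512 dhgroup=ffdhe8192 key=key3

# Pin the host (initiator) side to a single digest/DH-group combination.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Authorize the host NQN on the subsystem with the key under test, attach,
# then read back what the target actually negotiated.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

# The three checks visible in the log: negotiated digest, DH group, and a
# completed auth state on the resulting qpair.
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# Detach so the next digest/dhgroup/key combination starts clean.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0

On the nvme-cli side, the equivalent host connection visible at auth.sh@52 passes the same secret directly via --dhchap-secret, and the matching disconnect produces the "NQN:... disconnected 1 controller(s)" lines between rounds.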
12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.989 12:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.248 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.816 12:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.385 00:18:29.385 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.385 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.385 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.644 { 00:18:29.644 "cntlid": 145, 00:18:29.644 "qid": 0, 00:18:29.644 "state": "enabled", 00:18:29.644 "thread": "nvmf_tgt_poll_group_000", 00:18:29.644 "listen_address": { 00:18:29.644 "trtype": "TCP", 00:18:29.644 "adrfam": "IPv4", 00:18:29.644 "traddr": "10.0.0.2", 00:18:29.644 "trsvcid": "4420" 00:18:29.644 }, 00:18:29.644 "peer_address": { 00:18:29.644 "trtype": "TCP", 00:18:29.644 "adrfam": "IPv4", 00:18:29.644 "traddr": "10.0.0.1", 00:18:29.644 "trsvcid": "36302" 00:18:29.644 }, 00:18:29.644 "auth": { 00:18:29.644 "state": "completed", 00:18:29.644 "digest": "sha512", 00:18:29.644 "dhgroup": "ffdhe8192" 00:18:29.644 } 00:18:29.644 } 00:18:29.644 ]' 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.644 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.903 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.903 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.903 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.903 12:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Nzk4NjhmY2VhNTNmODY5OTQwZTkwYmI1MWM1ODEwNWE1OTcxNzRkOTA0MGE3OTVhSNfzBQ==: --dhchap-ctrl-secret DHHC-1:03:MDQ1NTgxZWVhM2UyZDMwNTE1ZjcxZDVmMTQ1ZWVjNzMxNzE4NjdlZjIyNjZlNzQxNjIyMmZlNmQxOGI5MjdiMDjfIWs=: 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.470 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:30.471 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.471 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.471 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:31.038 request: 00:18:31.038 { 00:18:31.038 "name": "nvme0", 00:18:31.038 "trtype": "tcp", 00:18:31.038 "traddr": "10.0.0.2", 00:18:31.038 "adrfam": "ipv4", 00:18:31.038 "trsvcid": "4420", 00:18:31.038 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:31.038 "prchk_reftag": false, 00:18:31.038 "prchk_guard": false, 00:18:31.038 "hdgst": false, 00:18:31.038 "ddgst": false, 00:18:31.039 "dhchap_key": "key2", 00:18:31.039 "method": "bdev_nvme_attach_controller", 00:18:31.039 "req_id": 1 00:18:31.039 } 00:18:31.039 Got JSON-RPC error response 00:18:31.039 response: 00:18:31.039 { 00:18:31.039 "code": -5, 00:18:31.039 "message": "Input/output error" 00:18:31.039 } 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.039 12:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:31.297 request: 00:18:31.297 { 00:18:31.297 "name": "nvme0", 00:18:31.297 "trtype": "tcp", 00:18:31.297 "traddr": "10.0.0.2", 00:18:31.297 "adrfam": "ipv4", 00:18:31.297 "trsvcid": "4420", 00:18:31.297 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:31.297 "prchk_reftag": false, 00:18:31.297 "prchk_guard": false, 00:18:31.297 "hdgst": false, 00:18:31.297 "ddgst": false, 00:18:31.297 "dhchap_key": "key1", 00:18:31.297 "dhchap_ctrlr_key": "ckey2", 00:18:31.297 "method": "bdev_nvme_attach_controller", 00:18:31.297 "req_id": 1 00:18:31.297 } 00:18:31.297 Got JSON-RPC error response 00:18:31.297 response: 00:18:31.297 { 00:18:31.297 "code": -5, 00:18:31.297 "message": "Input/output error" 00:18:31.297 } 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.556 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.815 request: 00:18:31.815 { 00:18:31.815 "name": "nvme0", 00:18:31.815 "trtype": "tcp", 00:18:31.815 "traddr": "10.0.0.2", 00:18:31.815 "adrfam": "ipv4", 00:18:31.815 "trsvcid": "4420", 00:18:31.815 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:31.815 "prchk_reftag": false, 00:18:31.815 "prchk_guard": false, 00:18:31.815 "hdgst": false, 00:18:31.815 "ddgst": false, 00:18:31.815 "dhchap_key": "key1", 00:18:31.815 "dhchap_ctrlr_key": "ckey1", 00:18:31.815 "method": "bdev_nvme_attach_controller", 00:18:31.815 "req_id": 1 00:18:31.815 } 00:18:31.815 Got JSON-RPC error response 00:18:31.815 response: 00:18:31.815 { 00:18:31.815 "code": -5, 00:18:31.815 "message": "Input/output error" 00:18:31.815 } 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1711414 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1711414 ']' 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1711414 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.815 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1711414 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1711414' 00:18:32.075 killing process with pid 1711414 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1711414 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1711414 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1732467 00:18:32.075 12:53:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1732467 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1732467 ']' 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.076 12:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1732467 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1732467 ']' 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
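[annotation] Here the matrix of passing rounds ends: the harness kills the first target (pid 1711414) and relaunches it with DH-CHAP debug logging for the failure-path tests that follow, where attaches offering a key or digest the subsystem was not configured for are expected to fail with the JSON-RPC code -5 / "Input/output error" responses captured in this log. Below is a sketch of that restart plus one such negative check (the earlier key1-configured / key2-offered case), under stated assumptions: binary and script paths are copied from the log, the polling loop stands in for the harness's waitforlisten, framework_start_init is the standard follow-up to --wait-for-rpc, and the NOT helper is an illustrative stand-in for the autotest_common.sh version, which additionally distinguishes crashes from clean failures.

# Relaunch the target inside the test netns with the nvmf_auth debug log
# flag; --wait-for-rpc defers subsystem init until framework_start_init.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Path-based UNIX sockets are not network-namespaced, so rpc.py can poll
# the default /var/tmp/spdk.sock without the netns wrapper.
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
"$rpc" framework_start_init

# Invert an exit status: the wrapped command must fail for the test to pass.
NOT() { ! "$@"; }

# The subsystem was authorized with key1 only, so an attach offering key2
# must be rejected; it surfaces as the -5 Input/output error seen above.
NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2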
00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.036 12:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.295 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.863 00:18:33.863 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.863 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.863 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.122 { 00:18:34.122 
"cntlid": 1, 00:18:34.122 "qid": 0, 00:18:34.122 "state": "enabled", 00:18:34.122 "thread": "nvmf_tgt_poll_group_000", 00:18:34.122 "listen_address": { 00:18:34.122 "trtype": "TCP", 00:18:34.122 "adrfam": "IPv4", 00:18:34.122 "traddr": "10.0.0.2", 00:18:34.122 "trsvcid": "4420" 00:18:34.122 }, 00:18:34.122 "peer_address": { 00:18:34.122 "trtype": "TCP", 00:18:34.122 "adrfam": "IPv4", 00:18:34.122 "traddr": "10.0.0.1", 00:18:34.122 "trsvcid": "36354" 00:18:34.122 }, 00:18:34.122 "auth": { 00:18:34.122 "state": "completed", 00:18:34.122 "digest": "sha512", 00:18:34.122 "dhgroup": "ffdhe8192" 00:18:34.122 } 00:18:34.122 } 00:18:34.122 ]' 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.122 12:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.381 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTMwNzk0MzYwZDg0YzBhNjcwNmNiNTlhYTU0NDgxODNkMWM1YmI0Mzg0ZTg5ODAxNWY0MjA4NDNhNDUyODk1ZBFd/gI=: 00:18:34.949 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.949 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:34.949 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:34.950 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.209 12:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.209 request: 00:18:35.209 { 00:18:35.209 "name": "nvme0", 00:18:35.209 "trtype": "tcp", 00:18:35.209 "traddr": "10.0.0.2", 00:18:35.209 "adrfam": "ipv4", 00:18:35.209 "trsvcid": "4420", 00:18:35.209 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:35.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:35.209 "prchk_reftag": false, 00:18:35.209 "prchk_guard": false, 00:18:35.209 "hdgst": false, 00:18:35.209 "ddgst": false, 00:18:35.209 "dhchap_key": "key3", 00:18:35.209 "method": "bdev_nvme_attach_controller", 00:18:35.209 "req_id": 1 00:18:35.209 } 00:18:35.209 Got JSON-RPC error response 00:18:35.209 response: 00:18:35.209 { 00:18:35.209 "code": -5, 00:18:35.209 "message": "Input/output error" 00:18:35.209 } 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:35.209 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.468 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.726 request: 00:18:35.726 { 00:18:35.726 "name": "nvme0", 00:18:35.726 "trtype": "tcp", 00:18:35.726 "traddr": "10.0.0.2", 00:18:35.726 "adrfam": "ipv4", 00:18:35.726 "trsvcid": "4420", 00:18:35.726 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:35.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:35.726 "prchk_reftag": false, 00:18:35.726 "prchk_guard": false, 00:18:35.726 "hdgst": false, 00:18:35.726 "ddgst": false, 00:18:35.726 "dhchap_key": "key3", 00:18:35.726 "method": "bdev_nvme_attach_controller", 00:18:35.726 "req_id": 1 00:18:35.727 } 00:18:35.727 Got JSON-RPC error response 00:18:35.727 response: 00:18:35.727 { 00:18:35.727 "code": -5, 00:18:35.727 "message": "Input/output error" 00:18:35.727 } 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.727 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.985 request: 00:18:35.985 { 00:18:35.985 "name": "nvme0", 00:18:35.985 "trtype": "tcp", 00:18:35.985 "traddr": "10.0.0.2", 00:18:35.985 "adrfam": "ipv4", 00:18:35.985 "trsvcid": "4420", 00:18:35.985 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:35.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:35.985 "prchk_reftag": false, 00:18:35.985 "prchk_guard": false, 00:18:35.985 "hdgst": false, 00:18:35.985 "ddgst": false, 00:18:35.985 
"dhchap_key": "key0", 00:18:35.985 "dhchap_ctrlr_key": "key1", 00:18:35.985 "method": "bdev_nvme_attach_controller", 00:18:35.985 "req_id": 1 00:18:35.985 } 00:18:35.985 Got JSON-RPC error response 00:18:35.985 response: 00:18:35.985 { 00:18:35.985 "code": -5, 00:18:35.985 "message": "Input/output error" 00:18:35.985 } 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:35.985 12:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:36.244 00:18:36.244 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:36.244 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:36.244 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.503 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.503 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.503 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1711652 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1711652 ']' 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1711652 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1711652 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1711652' 00:18:36.761 killing process with pid 1711652 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1711652 00:18:36.761 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1711652 
00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.019 rmmod nvme_tcp 00:18:37.019 rmmod nvme_fabrics 00:18:37.019 rmmod nvme_keyring 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1732467 ']' 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1732467 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1732467 ']' 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1732467 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1732467 00:18:37.019 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:37.020 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:37.020 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1732467' 00:18:37.020 killing process with pid 1732467 00:18:37.020 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1732467 00:18:37.020 12:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1732467 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.278 12:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.809 12:53:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.809 12:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.E6Z /tmp/spdk.key-sha256.VR8 /tmp/spdk.key-sha384.YAP /tmp/spdk.key-sha512.Rjj /tmp/spdk.key-sha512.fU1 /tmp/spdk.key-sha384.mNA /tmp/spdk.key-sha256.4pW '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:39.809 00:18:39.809 real 2m12.887s 00:18:39.809 user 5m4.977s 00:18:39.809 sys 0m21.055s 00:18:39.809 12:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:39.809 12:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.809 ************************************ 00:18:39.809 END TEST nvmf_auth_target 00:18:39.809 ************************************ 00:18:39.809 12:53:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:39.809 12:53:10 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:39.809 12:53:10 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:39.809 12:53:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:39.809 12:53:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.809 12:53:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:39.809 ************************************ 00:18:39.809 START TEST nvmf_bdevio_no_huge 00:18:39.809 ************************************ 00:18:39.809 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:39.809 * Looking for test storage... 00:18:39.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.809 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.809 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:39.809 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.809 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.809 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
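The bdevio suite whose environment is being set up here can also be replayed outside the Jenkins harness; a sketch using the paths from this job (point it at your own checkout, and note the run assumes the same NET_TYPE=phy / e810 setup recorded in autorun-spdk.conf):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages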
00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.810 12:53:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.810 12:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:45.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:45.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.145 
12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.145 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:45.146 Found net devices under 0000:86:00.0: cvl_0_0 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:45.146 Found net devices under 0000:86:00.1: cvl_0_1 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.146 12:53:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.146 12:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.146 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.146 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.146 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:18:45.405 00:18:45.405 --- 10.0.0.2 ping statistics --- 00:18:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.405 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:45.405 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:18:45.405 00:18:45.405 --- 10.0.0.1 ping statistics --- 00:18:45.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.406 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1736888 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1736888 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1736888 ']' 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.406 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.406 [2024-07-15 12:53:16.183236] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:45.406 [2024-07-15 12:53:16.183287] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:45.406 [2024-07-15 12:53:16.261911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.406 [2024-07-15 12:53:16.347221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.406 [2024-07-15 12:53:16.347259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.406 [2024-07-15 12:53:16.347266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.406 [2024-07-15 12:53:16.347272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.406 [2024-07-15 12:53:16.347277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
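The notices above name two ways to look at the tracepoint data the target just enabled with -e 0xFFFF; both commands are quoted from the app's own output (only the copy destination is an arbitrary choice):

# Live snapshot of the running target's trace buffer, per the notice:
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0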
00:18:45.406 [2024-07-15 12:53:16.347377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.406 [2024-07-15 12:53:16.347487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:45.406 [2024-07-15 12:53:16.347593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.406 [2024-07-15 12:53:16.347594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:46.345 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.345 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:46.345 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:46.345 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:46.345 12:53:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.345 [2024-07-15 12:53:17.035062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.345 Malloc0 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.345 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.346 [2024-07-15 12:53:17.079312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:46.346 { 00:18:46.346 "params": { 00:18:46.346 "name": "Nvme$subsystem", 00:18:46.346 "trtype": "$TEST_TRANSPORT", 00:18:46.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.346 "adrfam": "ipv4", 00:18:46.346 "trsvcid": "$NVMF_PORT", 00:18:46.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.346 "hdgst": ${hdgst:-false}, 00:18:46.346 "ddgst": ${ddgst:-false} 00:18:46.346 }, 00:18:46.346 "method": "bdev_nvme_attach_controller" 00:18:46.346 } 00:18:46.346 EOF 00:18:46.346 )") 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:46.346 12:53:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:46.346 "params": { 00:18:46.346 "name": "Nvme1", 00:18:46.346 "trtype": "tcp", 00:18:46.346 "traddr": "10.0.0.2", 00:18:46.346 "adrfam": "ipv4", 00:18:46.346 "trsvcid": "4420", 00:18:46.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.346 "hdgst": false, 00:18:46.346 "ddgst": false 00:18:46.346 }, 00:18:46.346 "method": "bdev_nvme_attach_controller" 00:18:46.346 }' 00:18:46.346 [2024-07-15 12:53:17.128959] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
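The configuration blob just printed is assembled by gen_nvmf_target_json from a bash here-document whose ${var:-default} expansions fill the optional fields, after which jq validates and re-serializes the result. A stripped-down sketch of that idiom (field set reduced for brevity; the full function in nvmf/common.sh carries more parameters than shown):

subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config" | jq .   # jq both validates and pretty-prints the generated config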
00:18:46.346 [2024-07-15 12:53:17.129010] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1736928 ] 00:18:46.346 [2024-07-15 12:53:17.202049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:46.346 [2024-07-15 12:53:17.288965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.346 [2024-07-15 12:53:17.289072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.346 [2024-07-15 12:53:17.289072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.605 I/O targets: 00:18:46.605 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:46.605 00:18:46.605 00:18:46.605 CUnit - A unit testing framework for C - Version 2.1-3 00:18:46.605 http://cunit.sourceforge.net/ 00:18:46.605 00:18:46.605 00:18:46.605 Suite: bdevio tests on: Nvme1n1 00:18:46.605 Test: blockdev write read block ...passed 00:18:46.864 Test: blockdev write zeroes read block ...passed 00:18:46.864 Test: blockdev write zeroes read no split ...passed 00:18:46.864 Test: blockdev write zeroes read split ...passed 00:18:46.864 Test: blockdev write zeroes read split partial ...passed 00:18:46.864 Test: blockdev reset ...[2024-07-15 12:53:17.680018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.864 [2024-07-15 12:53:17.680081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f7300 (9): Bad file descriptor 00:18:46.864 [2024-07-15 12:53:17.697535] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:46.864 passed 00:18:46.864 Test: blockdev write read 8 blocks ...passed 00:18:46.864 Test: blockdev write read size > 128k ...passed 00:18:46.864 Test: blockdev write read invalid size ...passed 00:18:46.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:46.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:46.864 Test: blockdev write read max offset ...passed 00:18:47.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:47.122 Test: blockdev writev readv 8 blocks ...passed 00:18:47.122 Test: blockdev writev readv 30 x 1block ...passed 00:18:47.122 Test: blockdev writev readv block ...passed 00:18:47.122 Test: blockdev writev readv size > 128k ...passed 00:18:47.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:47.122 Test: blockdev comparev and writev ...[2024-07-15 12:53:17.872336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.122 [2024-07-15 12:53:17.872364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.872377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.872385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.872648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.872671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.872678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.872931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.872941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.872952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.872962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.873201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.873213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.873228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:47.123 [2024-07-15 12:53:17.873235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:47.123 passed 00:18:47.123 Test: blockdev nvme passthru rw ...passed 00:18:47.123 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:53:17.956621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.123 [2024-07-15 12:53:17.956636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.956764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.123 [2024-07-15 12:53:17.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.956896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.123 [2024-07-15 12:53:17.956907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:47.123 [2024-07-15 12:53:17.957027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.123 [2024-07-15 12:53:17.957037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:47.123 passed 00:18:47.123 Test: blockdev nvme admin passthru ...passed 00:18:47.123 Test: blockdev copy ...passed 00:18:47.123 00:18:47.123 Run Summary: Type Total Ran Passed Failed Inactive 00:18:47.123 suites 1 1 n/a 0 0 00:18:47.123 tests 23 23 23 0 0 00:18:47.123 asserts 152 152 152 0 n/a 00:18:47.123 00:18:47.123 Elapsed time = 1.096 seconds 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.382 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.382 rmmod nvme_tcp 00:18:47.382 rmmod nvme_fabrics 00:18:47.382 rmmod nvme_keyring 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1736888 ']' 00:18:47.641 12:53:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1736888 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1736888 ']' 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1736888 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736888 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736888' 00:18:47.641 killing process with pid 1736888 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1736888 00:18:47.641 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1736888 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.900 12:53:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.438 12:53:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.438 00:18:50.438 real 0m10.489s 00:18:50.438 user 0m12.868s 00:18:50.438 sys 0m5.206s 00:18:50.438 12:53:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.438 12:53:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.438 ************************************ 00:18:50.438 END TEST nvmf_bdevio_no_huge 00:18:50.439 ************************************ 00:18:50.439 12:53:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:50.439 12:53:20 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:50.439 12:53:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:50.439 12:53:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.439 12:53:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:50.439 ************************************ 00:18:50.439 START TEST nvmf_tls 00:18:50.439 ************************************ 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:50.439 * Looking for test storage... 
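As in the earlier suites, the common.sh sourced next derives the host identity from nvme-cli. A sketch of that derivation (the parameter expansion shown is one plausible way to peel out the UUID the trace records as NVME_HOSTID; the exact mechanics inside common.sh may differ):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, reused as --hostid for nvme connect
echo "$NVME_HOSTNQN -> $NVME_HOSTID"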
00:18:50.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.439 12:53:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.712 
12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:55.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:55.712 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:55.712 Found net devices under 0000:86:00.0: cvl_0_0 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:55.712 Found net devices under 0000:86:00.1: cvl_0_1 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.712 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:18:55.970 00:18:55.970 --- 10.0.0.2 ping statistics --- 00:18:55.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.970 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:18:55.970 00:18:55.970 --- 10.0.0.1 ping statistics --- 00:18:55.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.970 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1740672 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1740672 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1740672 ']' 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.970 12:53:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.970 [2024-07-15 12:53:26.774634] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:55.970 [2024-07-15 12:53:26.774678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.970 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.970 [2024-07-15 12:53:26.843162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.970 [2024-07-15 12:53:26.921555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.970 [2024-07-15 12:53:26.921588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:55.970 [2024-07-15 12:53:26.921595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.970 [2024-07-15 12:53:26.921605] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.970 [2024-07-15 12:53:26.921611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.970 [2024-07-15 12:53:26.921635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.904 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:56.905 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:56.905 true 00:18:56.905 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.905 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:57.162 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:57.162 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:57.162 12:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:57.419 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.419 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:57.419 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:57.419 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:57.419 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:57.676 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.676 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:57.933 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:57.933 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:57.933 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:57.933 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:58.191 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:58.191 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:58.191 12:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:58.191 12:53:29 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.191 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:58.450 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:58.450 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:58.450 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:58.450 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.450 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xiaTmRTwvP 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.1LQ87R31W7 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xiaTmRTwvP 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1LQ87R31W7 00:18:58.708 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:58.967 12:53:29 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:59.225 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xiaTmRTwvP 00:18:59.225 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xiaTmRTwvP 00:18:59.225 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.484 [2024-07-15 12:53:30.233331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.484 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:59.484 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:59.743 [2024-07-15 12:53:30.562173] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.743 [2024-07-15 12:53:30.562360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.743 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.002 malloc0 00:19:00.002 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.002 12:53:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xiaTmRTwvP 00:19:00.261 [2024-07-15 12:53:31.083792] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.261 12:53:31 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xiaTmRTwvP 00:19:00.261 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.236 Initializing NVMe Controllers 00:19:10.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:10.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:10.236 Initialization complete. Launching workers. 
00:19:10.236 ======================================================== 00:19:10.236 Latency(us) 00:19:10.236 Device Information : IOPS MiB/s Average min max 00:19:10.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16486.40 64.40 3882.45 809.19 6507.31 00:19:10.236 ======================================================== 00:19:10.236 Total : 16486.40 64.40 3882.45 809.19 6507.31 00:19:10.236 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiaTmRTwvP 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xiaTmRTwvP' 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1743028 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1743028 /var/tmp/bdevperf.sock 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1743028 ']' 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.495 12:53:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.495 [2024-07-15 12:53:41.239332] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:10.495 [2024-07-15 12:53:41.239382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743028 ] 00:19:10.495 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.495 [2024-07-15 12:53:41.307887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.495 [2024-07-15 12:53:41.387011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.433 12:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.433 12:53:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:11.433 12:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xiaTmRTwvP 00:19:11.433 [2024-07-15 12:53:42.204999] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.433 [2024-07-15 12:53:42.205066] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:11.433 TLSTESTn1 00:19:11.433 12:53:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:11.692 Running I/O for 10 seconds... 00:19:21.743 00:19:21.743 Latency(us) 00:19:21.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.743 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.743 Verification LBA range: start 0x0 length 0x2000 00:19:21.743 TLSTESTn1 : 10.02 3615.73 14.12 0.00 0.00 35350.90 6981.01 54936.26 00:19:21.744 =================================================================================================================== 00:19:21.744 Total : 3615.73 14.12 0.00 0.00 35350.90 6981.01 54936.26 00:19:21.744 0 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1743028 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1743028 ']' 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1743028 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743028 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743028' 00:19:21.744 killing process with pid 1743028 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1743028 00:19:21.744 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.744 00:19:21.744 Latency(us) 00:19:21.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:21.744 =================================================================================================================== 00:19:21.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.744 [2024-07-15 12:53:52.503019] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1743028 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1LQ87R31W7 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1LQ87R31W7 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1LQ87R31W7 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1LQ87R31W7' 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1744923 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1744923 /var/tmp/bdevperf.sock 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1744923 ']' 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.744 12:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.003 [2024-07-15 12:53:52.730968] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:22.003 [2024-07-15 12:53:52.731017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744923 ] 00:19:22.003 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.003 [2024-07-15 12:53:52.798676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.003 [2024-07-15 12:53:52.878309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.938 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.938 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:22.938 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1LQ87R31W7 00:19:22.938 [2024-07-15 12:53:53.697306] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.939 [2024-07-15 12:53:53.697372] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:22.939 [2024-07-15 12:53:53.708011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:22.939 [2024-07-15 12:53:53.708623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb570 (107): Transport endpoint is not connected 00:19:22.939 [2024-07-15 12:53:53.709618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abb570 (9): Bad file descriptor 00:19:22.939 [2024-07-15 12:53:53.710618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:22.939 [2024-07-15 12:53:53.710628] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:22.939 [2024-07-15 12:53:53.710637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
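
Note on the trace above: this is the expected failure for the tls.sh@146 negative case. The target has only the PSK from /tmp/tmp.xiaTmRTwvP registered for host1, while the initiator offered /tmp/tmp.1LQ87R31W7, so the TLS handshake cannot complete and the attach collapses into the errno 107 / bad-descriptor / failed-state sequence; the JSON-RPC request and error response that bdevperf records for this attempt follow below. For reference, both interchange-format keys were produced by format_interchange_psk earlier in this log. A minimal standalone sketch of that derivation, written to match the "python -" step in the trace (the exact helper in nvmf/common.sh may differ in detail; assumes python3 and only the stdlib):

format_key() {
    local prefix=$1 key=$2 digest=$3
    # base64(key bytes + little-endian CRC32), framed as "prefix:digest:b64:"
    python3 -c '
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
raw = key.encode()                                     # key taken as an ASCII string, not hex-decoded
crc = zlib.crc32(raw).to_bytes(4, byteorder="little")  # 4-byte CRC appended per the interchange format
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(raw + crc).decode()), end="")
' "$prefix" "$key" "$digest"
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# expected output, matching the key logged above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
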
00:19:22.939 request: 00:19:22.939 { 00:19:22.939 "name": "TLSTEST", 00:19:22.939 "trtype": "tcp", 00:19:22.939 "traddr": "10.0.0.2", 00:19:22.939 "adrfam": "ipv4", 00:19:22.939 "trsvcid": "4420", 00:19:22.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.939 "prchk_reftag": false, 00:19:22.939 "prchk_guard": false, 00:19:22.939 "hdgst": false, 00:19:22.939 "ddgst": false, 00:19:22.939 "psk": "/tmp/tmp.1LQ87R31W7", 00:19:22.939 "method": "bdev_nvme_attach_controller", 00:19:22.939 "req_id": 1 00:19:22.939 } 00:19:22.939 Got JSON-RPC error response 00:19:22.939 response: 00:19:22.939 { 00:19:22.939 "code": -5, 00:19:22.939 "message": "Input/output error" 00:19:22.939 } 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1744923 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1744923 ']' 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1744923 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744923 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744923' 00:19:22.939 killing process with pid 1744923 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1744923 00:19:22.939 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.939 00:19:22.939 Latency(us) 00:19:22.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.939 =================================================================================================================== 00:19:22.939 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.939 [2024-07-15 12:53:53.785351] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:22.939 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1744923 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xiaTmRTwvP 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xiaTmRTwvP 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xiaTmRTwvP 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xiaTmRTwvP' 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1745101 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1745101 /var/tmp/bdevperf.sock 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1745101 ']' 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.198 12:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.198 [2024-07-15 12:53:54.006627] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:23.198 [2024-07-15 12:53:54.006674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745101 ] 00:19:23.198 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.198 [2024-07-15 12:53:54.074755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.198 [2024-07-15 12:53:54.145835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.134 12:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.134 12:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:24.134 12:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xiaTmRTwvP 00:19:24.134 [2024-07-15 12:53:54.979861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.134 [2024-07-15 12:53:54.979933] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:24.134 [2024-07-15 12:53:54.991197] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.134 [2024-07-15 12:53:54.991219] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.134 [2024-07-15 12:53:54.991249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.134 [2024-07-15 12:53:54.992150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1570 (107): Transport endpoint is not connected 00:19:24.134 [2024-07-15 12:53:54.993144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1570 (9): Bad file descriptor 00:19:24.134 [2024-07-15 12:53:54.994145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.134 [2024-07-15 12:53:54.994153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.134 [2024-07-15 12:53:54.994161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
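
Note on the trace above: the tls.sh@149 case fails on identity rather than key material. During the TLS 1.3 handshake the initiator presents the PSK identity shown in the error ("NVMe0R01 <hostnqn> <subnqn>"), and the target resolves the key per registered (subsystem, host) pair; nqn.2016-06.io.spdk:host2 was never added to cnode1, so the lookup fails and the socket is dropped, producing the same errno 107 path. The JSON-RPC dump for the attempt follows below. If host2 were actually meant to connect, the hypothetical fix is one more registration using the RPC already exercised for host1 earlier in this log (rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path):

# register host2's PSK with the target subsystem before attaching
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xiaTmRTwvP
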
00:19:24.134 request: 00:19:24.134 { 00:19:24.134 "name": "TLSTEST", 00:19:24.134 "trtype": "tcp", 00:19:24.134 "traddr": "10.0.0.2", 00:19:24.134 "adrfam": "ipv4", 00:19:24.134 "trsvcid": "4420", 00:19:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.134 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:24.134 "prchk_reftag": false, 00:19:24.134 "prchk_guard": false, 00:19:24.134 "hdgst": false, 00:19:24.134 "ddgst": false, 00:19:24.135 "psk": "/tmp/tmp.xiaTmRTwvP", 00:19:24.135 "method": "bdev_nvme_attach_controller", 00:19:24.135 "req_id": 1 00:19:24.135 } 00:19:24.135 Got JSON-RPC error response 00:19:24.135 response: 00:19:24.135 { 00:19:24.135 "code": -5, 00:19:24.135 "message": "Input/output error" 00:19:24.135 } 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1745101 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1745101 ']' 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1745101 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745101 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745101' 00:19:24.135 killing process with pid 1745101 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1745101 00:19:24.135 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.135 00:19:24.135 Latency(us) 00:19:24.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.135 =================================================================================================================== 00:19:24.135 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.135 [2024-07-15 12:53:55.062869] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:24.135 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1745101 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiaTmRTwvP 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiaTmRTwvP 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiaTmRTwvP 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xiaTmRTwvP' 00:19:24.393 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1745333 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1745333 /var/tmp/bdevperf.sock 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1745333 ']' 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.394 12:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.394 [2024-07-15 12:53:55.287241] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:24.394 [2024-07-15 12:53:55.287288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745333 ] 00:19:24.394 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.652 [2024-07-15 12:53:55.352672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.652 [2024-07-15 12:53:55.423159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.219 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.219 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:25.219 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xiaTmRTwvP 00:19:25.478 [2024-07-15 12:53:56.238055] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.478 [2024-07-15 12:53:56.238128] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.478 [2024-07-15 12:53:56.244398] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.478 [2024-07-15 12:53:56.244420] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.478 [2024-07-15 12:53:56.244444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.478 [2024-07-15 12:53:56.245442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1570 (107): Transport endpoint is not connected 00:19:25.478 [2024-07-15 12:53:56.246436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1570 (9): Bad file descriptor 00:19:25.478 [2024-07-15 12:53:56.247438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:25.478 [2024-07-15 12:53:56.247448] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.478 [2024-07-15 12:53:56.247457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
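
Note on the trace above: the tls.sh@152 case is the mirror image of the previous one. The identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" names a subsystem that was never created, so the PSK lookup fails exactly as it did for the unknown host; the JSON-RPC dump follows below. For contrast, the setup that made cnode1 reachable earlier in this log (setup_nvmf_tgt) condenses to the sequence sketched here; substituting cnode2, malloc1, and a fresh serial is hypothetical and shown only to make the dependency explicit:

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc1 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xiaTmRTwvP
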
00:19:25.478 request: 00:19:25.478 { 00:19:25.478 "name": "TLSTEST", 00:19:25.478 "trtype": "tcp", 00:19:25.478 "traddr": "10.0.0.2", 00:19:25.478 "adrfam": "ipv4", 00:19:25.478 "trsvcid": "4420", 00:19:25.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.478 "prchk_reftag": false, 00:19:25.478 "prchk_guard": false, 00:19:25.478 "hdgst": false, 00:19:25.478 "ddgst": false, 00:19:25.478 "psk": "/tmp/tmp.xiaTmRTwvP", 00:19:25.478 "method": "bdev_nvme_attach_controller", 00:19:25.478 "req_id": 1 00:19:25.478 } 00:19:25.478 Got JSON-RPC error response 00:19:25.478 response: 00:19:25.478 { 00:19:25.478 "code": -5, 00:19:25.478 "message": "Input/output error" 00:19:25.478 } 00:19:25.478 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1745333 00:19:25.478 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1745333 ']' 00:19:25.478 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1745333 00:19:25.478 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:25.478 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.479 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745333 00:19:25.479 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:25.479 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:25.479 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745333' 00:19:25.479 killing process with pid 1745333 00:19:25.479 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1745333 00:19:25.479 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.479 00:19:25.479 Latency(us) 00:19:25.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.479 =================================================================================================================== 00:19:25.479 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.479 [2024-07-15 12:53:56.317764] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:25.479 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1745333 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1745579 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1745579 /var/tmp/bdevperf.sock 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1745579 ']' 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.738 12:53:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.738 [2024-07-15 12:53:56.535919] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:25.738 [2024-07-15 12:53:56.535966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745579 ] 00:19:25.738 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.738 [2024-07-15 12:53:56.601071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.738 [2024-07-15 12:53:56.669002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.673 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.673 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.673 12:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:26.673 [2024-07-15 12:53:57.497604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.673 [2024-07-15 12:53:57.499382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1258af0 (9): Bad file descriptor 00:19:26.673 [2024-07-15 12:53:57.500381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.673 [2024-07-15 12:53:57.500393] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.673 [2024-07-15 12:53:57.500402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
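The failure above is the expected outcome of the negative case at target/tls.sh@155: bdev_nvme_attach_controller is issued with an empty PSK against a listener created with -k, so no TLS handshake takes place, the target drops the TCP connection during controller initialization, and the RPC returns -5, as the error dump that follows shows. Stripped of the xtrace noise, the attempted attach is the single command below (paths and arguments exactly as traced; it goes to the bdevperf RPC socket, not the target's):

# Negative case: no --psk against a TLS-only listener; expect
# "Input/output error" (-5) in the JSON-RPC response dumped next.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1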
00:19:26.673 request: 00:19:26.673 { 00:19:26.674 "name": "TLSTEST", 00:19:26.674 "trtype": "tcp", 00:19:26.674 "traddr": "10.0.0.2", 00:19:26.674 "adrfam": "ipv4", 00:19:26.674 "trsvcid": "4420", 00:19:26.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.674 "prchk_reftag": false, 00:19:26.674 "prchk_guard": false, 00:19:26.674 "hdgst": false, 00:19:26.674 "ddgst": false, 00:19:26.674 "method": "bdev_nvme_attach_controller", 00:19:26.674 "req_id": 1 00:19:26.674 } 00:19:26.674 Got JSON-RPC error response 00:19:26.674 response: 00:19:26.674 { 00:19:26.674 "code": -5, 00:19:26.674 "message": "Input/output error" 00:19:26.674 } 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1745579 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1745579 ']' 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1745579 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745579 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745579' 00:19:26.674 killing process with pid 1745579 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1745579 00:19:26.674 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.674 00:19:26.674 Latency(us) 00:19:26.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.674 =================================================================================================================== 00:19:26.674 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.674 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1745579 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1740672 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1740672 ']' 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1740672 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1740672 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1740672' 00:19:26.932 
killing process with pid 1740672 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1740672 00:19:26.932 [2024-07-15 12:53:57.791987] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:26.932 12:53:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1740672 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:27.201 12:53:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.B8of0O0mlK 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.B8of0O0mlK 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1745825 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1745825 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1745825 ']' 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.201 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.202 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.202 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.202 [2024-07-15 12:53:58.085249] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:27.202 [2024-07-15 12:53:58.085295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.202 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.202 [2024-07-15 12:53:58.154105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.461 [2024-07-15 12:53:58.231519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.461 [2024-07-15 12:53:58.231560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.461 [2024-07-15 12:53:58.231567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.461 [2024-07-15 12:53:58.231573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.461 [2024-07-15 12:53:58.231579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.461 [2024-07-15 12:53:58.231597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.B8of0O0mlK 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.B8of0O0mlK 00:19:28.028 12:53:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.287 [2024-07-15 12:53:59.082581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.287 12:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.545 12:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.545 [2024-07-15 12:53:59.443492] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.545 [2024-07-15 12:53:59.443669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.545 12:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.804 malloc0 00:19:28.804 12:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.066 12:53:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.B8of0O0mlK 00:19:29.066 [2024-07-15 12:53:59.977070] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8of0O0mlK 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.B8of0O0mlK' 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1746302 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1746302 /var/tmp/bdevperf.sock 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1746302 ']' 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.066 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.325 [2024-07-15 12:54:00.042358] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
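The bdevperf instance starting here is the first positive pass (target/tls.sh@167): the same attach is repeated, this time carrying the 0600 key file, and the verify workload configured at startup (-q 128 -o 4096 -w verify -t 10) is driven over the resulting TLS connection. Condensed, the two traced commands are:

# Positive case: attach with the interchange key, then kick off the
# workload via bdevperf's RPC helper (both commands appear verbatim
# in the surrounding trace).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.B8of0O0mlK
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests

The run that follows settles at roughly 5517 IOPS; at 4096 bytes per I/O that is 5516.89 x 4096 / 2^20 = 21.55 MiB/s, matching the MiB/s column in the results table.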
00:19:29.326 [2024-07-15 12:54:00.042412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746302 ] 00:19:29.326 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.326 [2024-07-15 12:54:00.109491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.326 [2024-07-15 12:54:00.183066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.262 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.262 12:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.262 12:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK 00:19:30.262 [2024-07-15 12:54:01.037295] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.262 [2024-07-15 12:54:01.037373] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:30.262 TLSTESTn1 00:19:30.262 12:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.521 Running I/O for 10 seconds... 00:19:40.506 00:19:40.506 Latency(us) 00:19:40.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.506 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.506 Verification LBA range: start 0x0 length 0x2000 00:19:40.506 TLSTESTn1 : 10.01 5516.89 21.55 0.00 0.00 23163.57 6012.22 40575.33 00:19:40.506 =================================================================================================================== 00:19:40.506 Total : 5516.89 21.55 0.00 0.00 23163.57 6012.22 40575.33 00:19:40.506 0 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1746302 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1746302 ']' 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1746302 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1746302 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1746302' 00:19:40.506 killing process with pid 1746302 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1746302 00:19:40.506 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.506 00:19:40.506 Latency(us) 00:19:40.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:40.506 =================================================================================================================== 00:19:40.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.506 [2024-07-15 12:54:11.327952] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:40.506 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1746302 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.B8of0O0mlK 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8of0O0mlK 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8of0O0mlK 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B8of0O0mlK 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.B8of0O0mlK' 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1748650 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1748650 /var/tmp/bdevperf.sock 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1748650 ']' 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.765 12:54:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.765 [2024-07-15 12:54:11.562565] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:40.765 [2024-07-15 12:54:11.562617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748650 ] 00:19:40.765 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.765 [2024-07-15 12:54:11.630706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.765 [2024-07-15 12:54:11.700062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.701 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.701 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:41.701 12:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK 00:19:41.701 [2024-07-15 12:54:12.526908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.701 [2024-07-15 12:54:12.526959] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:41.701 [2024-07-15 12:54:12.526966] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.B8of0O0mlK 00:19:41.701 request: 00:19:41.701 { 00:19:41.701 "name": "TLSTEST", 00:19:41.701 "trtype": "tcp", 00:19:41.701 "traddr": "10.0.0.2", 00:19:41.701 "adrfam": "ipv4", 00:19:41.701 "trsvcid": "4420", 00:19:41.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.701 "prchk_reftag": false, 00:19:41.701 "prchk_guard": false, 00:19:41.701 "hdgst": false, 00:19:41.701 "ddgst": false, 00:19:41.701 "psk": "/tmp/tmp.B8of0O0mlK", 00:19:41.701 "method": "bdev_nvme_attach_controller", 00:19:41.701 "req_id": 1 00:19:41.701 } 00:19:41.701 Got JSON-RPC error response 00:19:41.701 response: 00:19:41.701 { 00:19:41.702 "code": -1, 00:19:41.702 "message": "Operation not permitted" 00:19:41.702 } 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1748650 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1748650 ']' 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1748650 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748650 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748650' 00:19:41.702 killing process with pid 1748650 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1748650 00:19:41.702 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.702 00:19:41.702 Latency(us) 00:19:41.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.702 
=================================================================================================================== 00:19:41.702 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.702 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1748650 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1745825 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1745825 ']' 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1745825 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745825 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745825' 00:19:41.961 killing process with pid 1745825 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1745825 00:19:41.961 [2024-07-15 12:54:12.818092] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:41.961 12:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1745825 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1748900 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1748900 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1748900 ']' 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
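The "Incorrect permissions for PSK file" rejection above and the nvmf_subsystem_add_host failure coming up against the restarted target are the same gate seen from both ends: SPDK refuses to load a PSK file whose mode grants group or other access. What the test toggles, in isolation (key path taken from the trace; the stat line is illustrative only, not part of the suite):

key=/tmp/tmp.B8of0O0mlK   # interchange key written earlier by tls.sh@160-162
chmod 0666 "$key"         # world-readable: loading must fail (-1 / -32603 above and below)
stat -c '%a %n' "$key"    # confirm the mode the daemons will see
chmod 0600 "$key"         # owner-only: the key loads again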
00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.221 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.221 [2024-07-15 12:54:13.061007] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:42.221 [2024-07-15 12:54:13.061054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.221 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.221 [2024-07-15 12:54:13.132540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.479 [2024-07-15 12:54:13.208957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.479 [2024-07-15 12:54:13.208993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.479 [2024-07-15 12:54:13.209000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.479 [2024-07-15 12:54:13.209005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.479 [2024-07-15 12:54:13.209010] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.479 [2024-07-15 12:54:13.209027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.B8of0O0mlK 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.B8of0O0mlK 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.B8of0O0mlK 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.B8of0O0mlK 00:19:43.044 12:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.303 [2024-07-15 12:54:14.071081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.303 12:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.563 
12:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.563 [2024-07-15 12:54:14.411948] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.563 [2024-07-15 12:54:14.412152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.563 12:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:43.874 malloc0 00:19:43.874 12:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:43.874 12:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK 00:19:44.134 [2024-07-15 12:54:14.949371] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:44.134 [2024-07-15 12:54:14.949400] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:44.134 [2024-07-15 12:54:14.949425] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:44.134 request: 00:19:44.134 { 00:19:44.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.134 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.134 "psk": "/tmp/tmp.B8of0O0mlK", 00:19:44.134 "method": "nvmf_subsystem_add_host", 00:19:44.134 "req_id": 1 00:19:44.134 } 00:19:44.134 Got JSON-RPC error response 00:19:44.134 response: 00:19:44.134 { 00:19:44.134 "code": -32603, 00:19:44.134 "message": "Internal error" 00:19:44.134 } 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1748900 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1748900 ']' 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1748900 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.134 12:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748900 00:19:44.134 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:44.134 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:44.134 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748900' 00:19:44.134 killing process with pid 1748900 00:19:44.134 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1748900 00:19:44.134 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1748900 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.B8of0O0mlK 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:44.394 
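Once this target is up, setup_nvmf_tgt runs again (target/tls.sh@185) and this time every step succeeds, because the key file is back at mode 0600. Gathered from the surrounding trace lines, the helper expands to the RPC sequence below ($rpc is shorthand for the workspace rpc.py; the -k flag on the listener is what makes the PSK mandatory):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK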
12:54:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1749182 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1749182 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1749182 ']' 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.394 12:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 [2024-07-15 12:54:15.261987] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:44.394 [2024-07-15 12:54:15.262037] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.394 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.394 [2024-07-15 12:54:15.329282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.654 [2024-07-15 12:54:15.407621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.654 [2024-07-15 12:54:15.407657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.654 [2024-07-15 12:54:15.407664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.654 [2024-07-15 12:54:15.407670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.654 [2024-07-15 12:54:15.407674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
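While the target finishes coming up, it is worth unpacking the NVMeTLSkey-1:02:... string that /tmp/tmp.B8of0O0mlK holds. It was produced earlier by format_interchange_psk (target/tls.sh@159, via nvmf/common.sh's format_key and the traced "python -" step). A standalone sketch, assuming the interchange layout that helper implements — base64 of the configured PSK with a CRC-32 appended, with "02" encoding the digest argument (2) the test passes:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PY'
import base64, sys, zlib
psk = sys.argv[1].encode()
# Assumption: CRC-32 of the PSK bytes, appended little-endian, then
# base64 of psk+crc; this reproduces the NVMeTLSkey-1:02:...: value
# recorded in the trace above.
crc = zlib.crc32(psk).to_bytes(4, "little")
print("NVMeTLSkey-1:02:" + base64.b64encode(psk + crc).decode() + ":")
PY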
00:19:44.654 [2024-07-15 12:54:15.407691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.B8of0O0mlK 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.B8of0O0mlK 00:19:45.223 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.482 [2024-07-15 12:54:16.247135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.482 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.741 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.741 [2024-07-15 12:54:16.604041] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.741 [2024-07-15 12:54:16.604223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.741 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.000 malloc0 00:19:46.000 12:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK 00:19:46.260 [2024-07-15 12:54:17.149674] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1749644 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1749644 /var/tmp/bdevperf.sock 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1749644 ']' 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.260 12:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.260 [2024-07-15 12:54:17.209670] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:46.260 [2024-07-15 12:54:17.209719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749644 ] 00:19:46.520 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.520 [2024-07-15 12:54:17.278745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.520 [2024-07-15 12:54:17.353094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.087 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.087 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:47.087 12:54:18 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK 00:19:47.344 [2024-07-15 12:54:18.199837] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.344 [2024-07-15 12:54:18.199913] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:47.344 TLSTESTn1 00:19:47.344 12:54:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:47.912 12:54:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:47.912 "subsystems": [ 00:19:47.912 { 00:19:47.912 "subsystem": "keyring", 00:19:47.912 "config": [] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "iobuf", 00:19:47.912 "config": [ 00:19:47.912 { 00:19:47.912 "method": "iobuf_set_options", 00:19:47.912 "params": { 00:19:47.912 "small_pool_count": 8192, 00:19:47.912 "large_pool_count": 1024, 00:19:47.912 "small_bufsize": 8192, 00:19:47.912 "large_bufsize": 135168 00:19:47.912 } 00:19:47.912 } 00:19:47.912 ] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "sock", 00:19:47.912 "config": [ 00:19:47.912 { 00:19:47.912 "method": "sock_set_default_impl", 00:19:47.912 "params": { 00:19:47.912 "impl_name": "posix" 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "sock_impl_set_options", 00:19:47.912 "params": { 00:19:47.912 "impl_name": "ssl", 00:19:47.912 "recv_buf_size": 4096, 00:19:47.912 "send_buf_size": 4096, 00:19:47.912 "enable_recv_pipe": true, 00:19:47.912 "enable_quickack": false, 00:19:47.912 "enable_placement_id": 0, 00:19:47.912 "enable_zerocopy_send_server": true, 00:19:47.912 "enable_zerocopy_send_client": false, 00:19:47.912 "zerocopy_threshold": 0, 00:19:47.912 "tls_version": 0, 00:19:47.912 "enable_ktls": false 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "sock_impl_set_options", 00:19:47.912 "params": { 00:19:47.912 "impl_name": "posix", 00:19:47.912 "recv_buf_size": 2097152, 00:19:47.912 
"send_buf_size": 2097152, 00:19:47.912 "enable_recv_pipe": true, 00:19:47.912 "enable_quickack": false, 00:19:47.912 "enable_placement_id": 0, 00:19:47.912 "enable_zerocopy_send_server": true, 00:19:47.912 "enable_zerocopy_send_client": false, 00:19:47.912 "zerocopy_threshold": 0, 00:19:47.912 "tls_version": 0, 00:19:47.912 "enable_ktls": false 00:19:47.912 } 00:19:47.912 } 00:19:47.912 ] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "vmd", 00:19:47.912 "config": [] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "accel", 00:19:47.912 "config": [ 00:19:47.912 { 00:19:47.912 "method": "accel_set_options", 00:19:47.912 "params": { 00:19:47.912 "small_cache_size": 128, 00:19:47.912 "large_cache_size": 16, 00:19:47.912 "task_count": 2048, 00:19:47.912 "sequence_count": 2048, 00:19:47.912 "buf_count": 2048 00:19:47.912 } 00:19:47.912 } 00:19:47.912 ] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "bdev", 00:19:47.912 "config": [ 00:19:47.912 { 00:19:47.912 "method": "bdev_set_options", 00:19:47.912 "params": { 00:19:47.912 "bdev_io_pool_size": 65535, 00:19:47.912 "bdev_io_cache_size": 256, 00:19:47.912 "bdev_auto_examine": true, 00:19:47.912 "iobuf_small_cache_size": 128, 00:19:47.912 "iobuf_large_cache_size": 16 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "bdev_raid_set_options", 00:19:47.912 "params": { 00:19:47.912 "process_window_size_kb": 1024 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "bdev_iscsi_set_options", 00:19:47.912 "params": { 00:19:47.912 "timeout_sec": 30 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "bdev_nvme_set_options", 00:19:47.912 "params": { 00:19:47.912 "action_on_timeout": "none", 00:19:47.912 "timeout_us": 0, 00:19:47.912 "timeout_admin_us": 0, 00:19:47.912 "keep_alive_timeout_ms": 10000, 00:19:47.912 "arbitration_burst": 0, 00:19:47.912 "low_priority_weight": 0, 00:19:47.912 "medium_priority_weight": 0, 00:19:47.912 "high_priority_weight": 0, 00:19:47.912 "nvme_adminq_poll_period_us": 10000, 00:19:47.912 "nvme_ioq_poll_period_us": 0, 00:19:47.912 "io_queue_requests": 0, 00:19:47.912 "delay_cmd_submit": true, 00:19:47.912 "transport_retry_count": 4, 00:19:47.912 "bdev_retry_count": 3, 00:19:47.912 "transport_ack_timeout": 0, 00:19:47.912 "ctrlr_loss_timeout_sec": 0, 00:19:47.912 "reconnect_delay_sec": 0, 00:19:47.912 "fast_io_fail_timeout_sec": 0, 00:19:47.912 "disable_auto_failback": false, 00:19:47.912 "generate_uuids": false, 00:19:47.912 "transport_tos": 0, 00:19:47.912 "nvme_error_stat": false, 00:19:47.912 "rdma_srq_size": 0, 00:19:47.912 "io_path_stat": false, 00:19:47.912 "allow_accel_sequence": false, 00:19:47.912 "rdma_max_cq_size": 0, 00:19:47.912 "rdma_cm_event_timeout_ms": 0, 00:19:47.912 "dhchap_digests": [ 00:19:47.912 "sha256", 00:19:47.912 "sha384", 00:19:47.912 "sha512" 00:19:47.912 ], 00:19:47.912 "dhchap_dhgroups": [ 00:19:47.912 "null", 00:19:47.912 "ffdhe2048", 00:19:47.912 "ffdhe3072", 00:19:47.912 "ffdhe4096", 00:19:47.912 "ffdhe6144", 00:19:47.912 "ffdhe8192" 00:19:47.912 ] 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "bdev_nvme_set_hotplug", 00:19:47.912 "params": { 00:19:47.912 "period_us": 100000, 00:19:47.912 "enable": false 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "bdev_malloc_create", 00:19:47.912 "params": { 00:19:47.912 "name": "malloc0", 00:19:47.912 "num_blocks": 8192, 00:19:47.912 "block_size": 4096, 00:19:47.912 "physical_block_size": 4096, 00:19:47.912 "uuid": 
"b95aa840-0b10-4657-96e8-feda980049e7", 00:19:47.912 "optimal_io_boundary": 0 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "bdev_wait_for_examine" 00:19:47.912 } 00:19:47.912 ] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "nbd", 00:19:47.912 "config": [] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "scheduler", 00:19:47.912 "config": [ 00:19:47.912 { 00:19:47.912 "method": "framework_set_scheduler", 00:19:47.912 "params": { 00:19:47.912 "name": "static" 00:19:47.912 } 00:19:47.912 } 00:19:47.912 ] 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "subsystem": "nvmf", 00:19:47.912 "config": [ 00:19:47.912 { 00:19:47.912 "method": "nvmf_set_config", 00:19:47.912 "params": { 00:19:47.912 "discovery_filter": "match_any", 00:19:47.912 "admin_cmd_passthru": { 00:19:47.912 "identify_ctrlr": false 00:19:47.912 } 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "nvmf_set_max_subsystems", 00:19:47.912 "params": { 00:19:47.912 "max_subsystems": 1024 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "nvmf_set_crdt", 00:19:47.912 "params": { 00:19:47.912 "crdt1": 0, 00:19:47.912 "crdt2": 0, 00:19:47.912 "crdt3": 0 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "nvmf_create_transport", 00:19:47.912 "params": { 00:19:47.912 "trtype": "TCP", 00:19:47.912 "max_queue_depth": 128, 00:19:47.912 "max_io_qpairs_per_ctrlr": 127, 00:19:47.912 "in_capsule_data_size": 4096, 00:19:47.912 "max_io_size": 131072, 00:19:47.912 "io_unit_size": 131072, 00:19:47.912 "max_aq_depth": 128, 00:19:47.912 "num_shared_buffers": 511, 00:19:47.912 "buf_cache_size": 4294967295, 00:19:47.912 "dif_insert_or_strip": false, 00:19:47.912 "zcopy": false, 00:19:47.912 "c2h_success": false, 00:19:47.912 "sock_priority": 0, 00:19:47.912 "abort_timeout_sec": 1, 00:19:47.912 "ack_timeout": 0, 00:19:47.912 "data_wr_pool_size": 0 00:19:47.912 } 00:19:47.912 }, 00:19:47.912 { 00:19:47.912 "method": "nvmf_create_subsystem", 00:19:47.912 "params": { 00:19:47.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.912 "allow_any_host": false, 00:19:47.912 "serial_number": "SPDK00000000000001", 00:19:47.912 "model_number": "SPDK bdev Controller", 00:19:47.913 "max_namespaces": 10, 00:19:47.913 "min_cntlid": 1, 00:19:47.913 "max_cntlid": 65519, 00:19:47.913 "ana_reporting": false 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "nvmf_subsystem_add_host", 00:19:47.913 "params": { 00:19:47.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.913 "host": "nqn.2016-06.io.spdk:host1", 00:19:47.913 "psk": "/tmp/tmp.B8of0O0mlK" 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "nvmf_subsystem_add_ns", 00:19:47.913 "params": { 00:19:47.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.913 "namespace": { 00:19:47.913 "nsid": 1, 00:19:47.913 "bdev_name": "malloc0", 00:19:47.913 "nguid": "B95AA8400B10465796E8FEDA980049E7", 00:19:47.913 "uuid": "b95aa840-0b10-4657-96e8-feda980049e7", 00:19:47.913 "no_auto_visible": false 00:19:47.913 } 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "nvmf_subsystem_add_listener", 00:19:47.913 "params": { 00:19:47.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.913 "listen_address": { 00:19:47.913 "trtype": "TCP", 00:19:47.913 "adrfam": "IPv4", 00:19:47.913 "traddr": "10.0.0.2", 00:19:47.913 "trsvcid": "4420" 00:19:47.913 }, 00:19:47.913 "secure_channel": true 00:19:47.913 } 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 }' 00:19:47.913 12:54:18 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:47.913 "subsystems": [ 00:19:47.913 { 00:19:47.913 "subsystem": "keyring", 00:19:47.913 "config": [] 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "subsystem": "iobuf", 00:19:47.913 "config": [ 00:19:47.913 { 00:19:47.913 "method": "iobuf_set_options", 00:19:47.913 "params": { 00:19:47.913 "small_pool_count": 8192, 00:19:47.913 "large_pool_count": 1024, 00:19:47.913 "small_bufsize": 8192, 00:19:47.913 "large_bufsize": 135168 00:19:47.913 } 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "subsystem": "sock", 00:19:47.913 "config": [ 00:19:47.913 { 00:19:47.913 "method": "sock_set_default_impl", 00:19:47.913 "params": { 00:19:47.913 "impl_name": "posix" 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "sock_impl_set_options", 00:19:47.913 "params": { 00:19:47.913 "impl_name": "ssl", 00:19:47.913 "recv_buf_size": 4096, 00:19:47.913 "send_buf_size": 4096, 00:19:47.913 "enable_recv_pipe": true, 00:19:47.913 "enable_quickack": false, 00:19:47.913 "enable_placement_id": 0, 00:19:47.913 "enable_zerocopy_send_server": true, 00:19:47.913 "enable_zerocopy_send_client": false, 00:19:47.913 "zerocopy_threshold": 0, 00:19:47.913 "tls_version": 0, 00:19:47.913 "enable_ktls": false 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "sock_impl_set_options", 00:19:47.913 "params": { 00:19:47.913 "impl_name": "posix", 00:19:47.913 "recv_buf_size": 2097152, 00:19:47.913 "send_buf_size": 2097152, 00:19:47.913 "enable_recv_pipe": true, 00:19:47.913 "enable_quickack": false, 00:19:47.913 "enable_placement_id": 0, 00:19:47.913 "enable_zerocopy_send_server": true, 00:19:47.913 "enable_zerocopy_send_client": false, 00:19:47.913 "zerocopy_threshold": 0, 00:19:47.913 "tls_version": 0, 00:19:47.913 "enable_ktls": false 00:19:47.913 } 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "subsystem": "vmd", 00:19:47.913 "config": [] 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "subsystem": "accel", 00:19:47.913 "config": [ 00:19:47.913 { 00:19:47.913 "method": "accel_set_options", 00:19:47.913 "params": { 00:19:47.913 "small_cache_size": 128, 00:19:47.913 "large_cache_size": 16, 00:19:47.913 "task_count": 2048, 00:19:47.913 "sequence_count": 2048, 00:19:47.913 "buf_count": 2048 00:19:47.913 } 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "subsystem": "bdev", 00:19:47.913 "config": [ 00:19:47.913 { 00:19:47.913 "method": "bdev_set_options", 00:19:47.913 "params": { 00:19:47.913 "bdev_io_pool_size": 65535, 00:19:47.913 "bdev_io_cache_size": 256, 00:19:47.913 "bdev_auto_examine": true, 00:19:47.913 "iobuf_small_cache_size": 128, 00:19:47.913 "iobuf_large_cache_size": 16 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "bdev_raid_set_options", 00:19:47.913 "params": { 00:19:47.913 "process_window_size_kb": 1024 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "bdev_iscsi_set_options", 00:19:47.913 "params": { 00:19:47.913 "timeout_sec": 30 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "bdev_nvme_set_options", 00:19:47.913 "params": { 00:19:47.913 "action_on_timeout": "none", 00:19:47.913 "timeout_us": 0, 00:19:47.913 "timeout_admin_us": 0, 00:19:47.913 "keep_alive_timeout_ms": 10000, 00:19:47.913 "arbitration_burst": 0, 
00:19:47.913 "low_priority_weight": 0, 00:19:47.913 "medium_priority_weight": 0, 00:19:47.913 "high_priority_weight": 0, 00:19:47.913 "nvme_adminq_poll_period_us": 10000, 00:19:47.913 "nvme_ioq_poll_period_us": 0, 00:19:47.913 "io_queue_requests": 512, 00:19:47.913 "delay_cmd_submit": true, 00:19:47.913 "transport_retry_count": 4, 00:19:47.913 "bdev_retry_count": 3, 00:19:47.913 "transport_ack_timeout": 0, 00:19:47.913 "ctrlr_loss_timeout_sec": 0, 00:19:47.913 "reconnect_delay_sec": 0, 00:19:47.913 "fast_io_fail_timeout_sec": 0, 00:19:47.913 "disable_auto_failback": false, 00:19:47.913 "generate_uuids": false, 00:19:47.913 "transport_tos": 0, 00:19:47.913 "nvme_error_stat": false, 00:19:47.913 "rdma_srq_size": 0, 00:19:47.913 "io_path_stat": false, 00:19:47.913 "allow_accel_sequence": false, 00:19:47.913 "rdma_max_cq_size": 0, 00:19:47.913 "rdma_cm_event_timeout_ms": 0, 00:19:47.913 "dhchap_digests": [ 00:19:47.913 "sha256", 00:19:47.913 "sha384", 00:19:47.913 "sha512" 00:19:47.913 ], 00:19:47.913 "dhchap_dhgroups": [ 00:19:47.913 "null", 00:19:47.913 "ffdhe2048", 00:19:47.913 "ffdhe3072", 00:19:47.913 "ffdhe4096", 00:19:47.913 "ffdhe6144", 00:19:47.913 "ffdhe8192" 00:19:47.913 ] 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "bdev_nvme_attach_controller", 00:19:47.913 "params": { 00:19:47.913 "name": "TLSTEST", 00:19:47.913 "trtype": "TCP", 00:19:47.913 "adrfam": "IPv4", 00:19:47.913 "traddr": "10.0.0.2", 00:19:47.913 "trsvcid": "4420", 00:19:47.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.913 "prchk_reftag": false, 00:19:47.913 "prchk_guard": false, 00:19:47.913 "ctrlr_loss_timeout_sec": 0, 00:19:47.913 "reconnect_delay_sec": 0, 00:19:47.913 "fast_io_fail_timeout_sec": 0, 00:19:47.913 "psk": "/tmp/tmp.B8of0O0mlK", 00:19:47.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.913 "hdgst": false, 00:19:47.913 "ddgst": false 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "bdev_nvme_set_hotplug", 00:19:47.913 "params": { 00:19:47.913 "period_us": 100000, 00:19:47.913 "enable": false 00:19:47.913 } 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "method": "bdev_wait_for_examine" 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 }, 00:19:47.913 { 00:19:47.913 "subsystem": "nbd", 00:19:47.913 "config": [] 00:19:47.913 } 00:19:47.913 ] 00:19:47.913 }' 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1749644 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1749644 ']' 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1749644 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749644 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749644' 00:19:47.913 killing process with pid 1749644 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1749644 00:19:47.913 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.913 00:19:47.913 Latency(us) 00:19:47.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:47.913 =================================================================================================================== 00:19:47.913 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.913 [2024-07-15 12:54:18.854519] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:47.913 12:54:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1749644 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1749182 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1749182 ']' 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1749182 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749182 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749182' 00:19:48.172 killing process with pid 1749182 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1749182 00:19:48.172 [2024-07-15 12:54:19.077488] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:48.172 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1749182 00:19:48.431 12:54:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:48.431 12:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.431 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.431 12:54:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:48.431 "subsystems": [ 00:19:48.431 { 00:19:48.431 "subsystem": "keyring", 00:19:48.431 "config": [] 00:19:48.431 }, 00:19:48.431 { 00:19:48.431 "subsystem": "iobuf", 00:19:48.431 "config": [ 00:19:48.431 { 00:19:48.431 "method": "iobuf_set_options", 00:19:48.431 "params": { 00:19:48.431 "small_pool_count": 8192, 00:19:48.432 "large_pool_count": 1024, 00:19:48.432 "small_bufsize": 8192, 00:19:48.432 "large_bufsize": 135168 00:19:48.432 } 00:19:48.432 } 00:19:48.432 ] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "sock", 00:19:48.432 "config": [ 00:19:48.432 { 00:19:48.432 "method": "sock_set_default_impl", 00:19:48.432 "params": { 00:19:48.432 "impl_name": "posix" 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "sock_impl_set_options", 00:19:48.432 "params": { 00:19:48.432 "impl_name": "ssl", 00:19:48.432 "recv_buf_size": 4096, 00:19:48.432 "send_buf_size": 4096, 00:19:48.432 "enable_recv_pipe": true, 00:19:48.432 "enable_quickack": false, 00:19:48.432 "enable_placement_id": 0, 00:19:48.432 "enable_zerocopy_send_server": true, 00:19:48.432 "enable_zerocopy_send_client": false, 00:19:48.432 "zerocopy_threshold": 0, 00:19:48.432 "tls_version": 0, 00:19:48.432 "enable_ktls": false 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "sock_impl_set_options", 00:19:48.432 "params": { 00:19:48.432 "impl_name": "posix", 00:19:48.432 
"recv_buf_size": 2097152, 00:19:48.432 "send_buf_size": 2097152, 00:19:48.432 "enable_recv_pipe": true, 00:19:48.432 "enable_quickack": false, 00:19:48.432 "enable_placement_id": 0, 00:19:48.432 "enable_zerocopy_send_server": true, 00:19:48.432 "enable_zerocopy_send_client": false, 00:19:48.432 "zerocopy_threshold": 0, 00:19:48.432 "tls_version": 0, 00:19:48.432 "enable_ktls": false 00:19:48.432 } 00:19:48.432 } 00:19:48.432 ] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "vmd", 00:19:48.432 "config": [] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "accel", 00:19:48.432 "config": [ 00:19:48.432 { 00:19:48.432 "method": "accel_set_options", 00:19:48.432 "params": { 00:19:48.432 "small_cache_size": 128, 00:19:48.432 "large_cache_size": 16, 00:19:48.432 "task_count": 2048, 00:19:48.432 "sequence_count": 2048, 00:19:48.432 "buf_count": 2048 00:19:48.432 } 00:19:48.432 } 00:19:48.432 ] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "bdev", 00:19:48.432 "config": [ 00:19:48.432 { 00:19:48.432 "method": "bdev_set_options", 00:19:48.432 "params": { 00:19:48.432 "bdev_io_pool_size": 65535, 00:19:48.432 "bdev_io_cache_size": 256, 00:19:48.432 "bdev_auto_examine": true, 00:19:48.432 "iobuf_small_cache_size": 128, 00:19:48.432 "iobuf_large_cache_size": 16 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "bdev_raid_set_options", 00:19:48.432 "params": { 00:19:48.432 "process_window_size_kb": 1024 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "bdev_iscsi_set_options", 00:19:48.432 "params": { 00:19:48.432 "timeout_sec": 30 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "bdev_nvme_set_options", 00:19:48.432 "params": { 00:19:48.432 "action_on_timeout": "none", 00:19:48.432 "timeout_us": 0, 00:19:48.432 "timeout_admin_us": 0, 00:19:48.432 "keep_alive_timeout_ms": 10000, 00:19:48.432 "arbitration_burst": 0, 00:19:48.432 "low_priority_weight": 0, 00:19:48.432 "medium_priority_weight": 0, 00:19:48.432 "high_priority_weight": 0, 00:19:48.432 "nvme_adminq_poll_period_us": 10000, 00:19:48.432 "nvme_ioq_poll_period_us": 0, 00:19:48.432 "io_queue_requests": 0, 00:19:48.432 "delay_cmd_submit": true, 00:19:48.432 "transport_retry_count": 4, 00:19:48.432 "bdev_retry_count": 3, 00:19:48.432 "transport_ack_timeout": 0, 00:19:48.432 "ctrlr_loss_timeout_sec": 0, 00:19:48.432 "reconnect_delay_sec": 0, 00:19:48.432 "fast_io_fail_timeout_sec": 0, 00:19:48.432 "disable_auto_failback": false, 00:19:48.432 "generate_uuids": false, 00:19:48.432 "transport_tos": 0, 00:19:48.432 "nvme_error_stat": false, 00:19:48.432 "rdma_srq_size": 0, 00:19:48.432 "io_path_stat": false, 00:19:48.432 "allow_accel_sequence": false, 00:19:48.432 "rdma_max_cq_size": 0, 00:19:48.432 "rdma_cm_event_timeout_ms": 0, 00:19:48.432 "dhchap_digests": [ 00:19:48.432 "sha256", 00:19:48.432 "sha384", 00:19:48.432 "sha512" 00:19:48.432 ], 00:19:48.432 "dhchap_dhgroups": [ 00:19:48.432 "null", 00:19:48.432 "ffdhe2048", 00:19:48.432 "ffdhe3072", 00:19:48.432 "ffdhe4096", 00:19:48.432 "ffdhe6144", 00:19:48.432 "ffdhe8192" 00:19:48.432 ] 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "bdev_nvme_set_hotplug", 00:19:48.432 "params": { 00:19:48.432 "period_us": 100000, 00:19:48.432 "enable": false 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "bdev_malloc_create", 00:19:48.432 "params": { 00:19:48.432 "name": "malloc0", 00:19:48.432 "num_blocks": 8192, 00:19:48.432 "block_size": 4096, 00:19:48.432 "physical_block_size": 4096, 
00:19:48.432 "uuid": "b95aa840-0b10-4657-96e8-feda980049e7", 00:19:48.432 "optimal_io_boundary": 0 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "bdev_wait_for_examine" 00:19:48.432 } 00:19:48.432 ] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "nbd", 00:19:48.432 "config": [] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "scheduler", 00:19:48.432 "config": [ 00:19:48.432 { 00:19:48.432 "method": "framework_set_scheduler", 00:19:48.432 "params": { 00:19:48.432 "name": "static" 00:19:48.432 } 00:19:48.432 } 00:19:48.432 ] 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "subsystem": "nvmf", 00:19:48.432 "config": [ 00:19:48.432 { 00:19:48.432 "method": "nvmf_set_config", 00:19:48.432 "params": { 00:19:48.432 "discovery_filter": "match_any", 00:19:48.432 "admin_cmd_passthru": { 00:19:48.432 "identify_ctrlr": false 00:19:48.432 } 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "nvmf_set_max_subsystems", 00:19:48.432 "params": { 00:19:48.432 "max_subsystems": 1024 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "nvmf_set_crdt", 00:19:48.432 "params": { 00:19:48.432 "crdt1": 0, 00:19:48.432 "crdt2": 0, 00:19:48.432 "crdt3": 0 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "nvmf_create_transport", 00:19:48.432 "params": { 00:19:48.432 "trtype": "TCP", 00:19:48.432 "max_queue_depth": 128, 00:19:48.432 "max_io_qpairs_per_ctrlr": 127, 00:19:48.432 "in_capsule_data_size": 4096, 00:19:48.432 "max_io_size": 131072, 00:19:48.432 "io_unit_size": 131072, 00:19:48.432 "max_aq_depth": 128, 00:19:48.432 "num_shared_buffers": 511, 00:19:48.432 "buf_cache_size": 4294967295, 00:19:48.432 "dif_insert_or_strip": false, 00:19:48.432 "zcopy": false, 00:19:48.432 "c2h_success": false, 00:19:48.432 "sock_priority": 0, 00:19:48.432 "abort_timeout_sec": 1, 00:19:48.432 "ack_timeout": 0, 00:19:48.432 "data_wr_pool_size": 0 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "nvmf_create_subsystem", 00:19:48.432 "params": { 00:19:48.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.432 "allow_any_host": false, 00:19:48.432 "serial_number": "SPDK00000000000001", 00:19:48.432 "model_number": "SPDK bdev Controller", 00:19:48.432 "max_namespaces": 10, 00:19:48.432 "min_cntlid": 1, 00:19:48.432 "max_cntlid": 65519, 00:19:48.432 "ana_reporting": false 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "nvmf_subsystem_add_host", 00:19:48.432 "params": { 00:19:48.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.432 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.432 "psk": "/tmp/tmp.B8of0O0mlK" 00:19:48.432 } 00:19:48.432 }, 00:19:48.432 { 00:19:48.432 "method": "nvmf_subsystem_add_ns", 00:19:48.432 "params": { 00:19:48.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.432 "namespace": { 00:19:48.432 "nsid": 1, 00:19:48.432 "bdev_name": "malloc0", 00:19:48.432 "nguid": "B95AA8400B10465796E8FEDA980049E7", 00:19:48.433 "uuid": "b95aa840-0b10-4657-96e8-feda980049e7", 00:19:48.433 "no_auto_visible": false 00:19:48.433 } 00:19:48.433 } 00:19:48.433 }, 00:19:48.433 { 00:19:48.433 "method": "nvmf_subsystem_add_listener", 00:19:48.433 "params": { 00:19:48.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.433 "listen_address": { 00:19:48.433 "trtype": "TCP", 00:19:48.433 "adrfam": "IPv4", 00:19:48.433 "traddr": "10.0.0.2", 00:19:48.433 "trsvcid": "4420" 00:19:48.433 }, 00:19:48.433 "secure_channel": true 00:19:48.433 } 00:19:48.433 } 00:19:48.433 ] 00:19:48.433 } 00:19:48.433 ] 00:19:48.433 }' 
00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1749900 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1749900 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1749900 ']' 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.433 12:54:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.433 [2024-07-15 12:54:19.324042] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:48.433 [2024-07-15 12:54:19.324089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.433 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.692 [2024-07-15 12:54:19.394308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.692 [2024-07-15 12:54:19.469012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.692 [2024-07-15 12:54:19.469049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.692 [2024-07-15 12:54:19.469056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.692 [2024-07-15 12:54:19.469062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.692 [2024-07-15 12:54:19.469067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
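The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice comes from DPDK's hugepage probe during application start-up. Whether 2 MiB pages are actually reserved can be verified outside the harness (a sketch; per-NUMA-node counters additionally live under /sys/devices/system/node/node*/hugepages/):

    # Reserved, free and default-size hugepage counters on this host
    grep -E '^(HugePages_|Hugepagesize)' /proc/meminfo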
00:19:48.692 [2024-07-15 12:54:19.469116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.951 [2024-07-15 12:54:19.672707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.951 [2024-07-15 12:54:19.688688] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:48.951 [2024-07-15 12:54:19.704737] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.951 [2024-07-15 12:54:19.719545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.210 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.210 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:49.210 12:54:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.210 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.210 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1750148 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1750148 /var/tmp/bdevperf.sock 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1750148 ']' 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
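waitforlisten gates the next test step on the bdevperf RPC socket becoming connectable. A minimal standalone equivalent of that wait (a sketch; the real helper in autotest_common.sh also watches the process itself):

    # Poll until /var/tmp/bdevperf.sock exists and accepts a connection
    sock=/var/tmp/bdevperf.sock
    until [ -S "$sock" ] && python3 -c "import socket,sys; socket.socket(socket.AF_UNIX).connect(sys.argv[1])" "$sock" 2>/dev/null; do
            sleep 0.1
    done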
00:19:49.469 12:54:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:49.469 "subsystems": [ 00:19:49.469 { 00:19:49.469 "subsystem": "keyring", 00:19:49.469 "config": [] 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "subsystem": "iobuf", 00:19:49.469 "config": [ 00:19:49.469 { 00:19:49.469 "method": "iobuf_set_options", 00:19:49.469 "params": { 00:19:49.469 "small_pool_count": 8192, 00:19:49.469 "large_pool_count": 1024, 00:19:49.469 "small_bufsize": 8192, 00:19:49.469 "large_bufsize": 135168 00:19:49.469 } 00:19:49.469 } 00:19:49.469 ] 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "subsystem": "sock", 00:19:49.469 "config": [ 00:19:49.469 { 00:19:49.469 "method": "sock_set_default_impl", 00:19:49.469 "params": { 00:19:49.469 "impl_name": "posix" 00:19:49.469 } 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "method": "sock_impl_set_options", 00:19:49.469 "params": { 00:19:49.469 "impl_name": "ssl", 00:19:49.469 "recv_buf_size": 4096, 00:19:49.469 "send_buf_size": 4096, 00:19:49.469 "enable_recv_pipe": true, 00:19:49.469 "enable_quickack": false, 00:19:49.469 "enable_placement_id": 0, 00:19:49.469 "enable_zerocopy_send_server": true, 00:19:49.469 "enable_zerocopy_send_client": false, 00:19:49.469 "zerocopy_threshold": 0, 00:19:49.469 "tls_version": 0, 00:19:49.469 "enable_ktls": false 00:19:49.469 } 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "method": "sock_impl_set_options", 00:19:49.469 "params": { 00:19:49.469 "impl_name": "posix", 00:19:49.469 "recv_buf_size": 2097152, 00:19:49.469 "send_buf_size": 2097152, 00:19:49.469 "enable_recv_pipe": true, 00:19:49.469 "enable_quickack": false, 00:19:49.469 "enable_placement_id": 0, 00:19:49.469 "enable_zerocopy_send_server": true, 00:19:49.469 "enable_zerocopy_send_client": false, 00:19:49.469 "zerocopy_threshold": 0, 00:19:49.469 "tls_version": 0, 00:19:49.469 "enable_ktls": false 00:19:49.469 } 00:19:49.469 } 00:19:49.469 ] 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "subsystem": "vmd", 00:19:49.469 "config": [] 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "subsystem": "accel", 00:19:49.469 "config": [ 00:19:49.469 { 00:19:49.469 "method": "accel_set_options", 00:19:49.469 "params": { 00:19:49.469 "small_cache_size": 128, 00:19:49.469 "large_cache_size": 16, 00:19:49.469 "task_count": 2048, 00:19:49.469 "sequence_count": 2048, 00:19:49.469 "buf_count": 2048 00:19:49.469 } 00:19:49.469 } 00:19:49.469 ] 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "subsystem": "bdev", 00:19:49.469 "config": [ 00:19:49.469 { 00:19:49.469 "method": "bdev_set_options", 00:19:49.469 "params": { 00:19:49.469 "bdev_io_pool_size": 65535, 00:19:49.469 "bdev_io_cache_size": 256, 00:19:49.469 "bdev_auto_examine": true, 00:19:49.469 "iobuf_small_cache_size": 128, 00:19:49.469 "iobuf_large_cache_size": 16 00:19:49.469 } 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "method": "bdev_raid_set_options", 00:19:49.469 "params": { 00:19:49.469 "process_window_size_kb": 1024 00:19:49.469 } 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "method": "bdev_iscsi_set_options", 00:19:49.469 "params": { 00:19:49.469 "timeout_sec": 30 00:19:49.469 } 00:19:49.469 }, 00:19:49.469 { 00:19:49.469 "method": "bdev_nvme_set_options", 00:19:49.469 "params": { 00:19:49.469 "action_on_timeout": "none", 00:19:49.469 "timeout_us": 0, 00:19:49.469 "timeout_admin_us": 0, 00:19:49.469 "keep_alive_timeout_ms": 10000, 00:19:49.469 "arbitration_burst": 0, 00:19:49.469 "low_priority_weight": 0, 00:19:49.469 "medium_priority_weight": 0, 00:19:49.469 "high_priority_weight": 0, 00:19:49.469 
"nvme_adminq_poll_period_us": 10000, 00:19:49.469 "nvme_ioq_poll_period_us": 0, 00:19:49.469 "io_queue_requests": 512, 00:19:49.469 "delay_cmd_submit": true, 00:19:49.469 "transport_retry_count": 4, 00:19:49.469 "bdev_retry_count": 3, 00:19:49.469 "transport_ack_timeout": 0, 00:19:49.469 "ctrlr_loss_timeout_sec": 0, 00:19:49.469 "reconnect_delay_sec": 0, 00:19:49.469 "fast_io_fail_timeout_sec": 0, 00:19:49.469 "disable_auto_failback": false, 00:19:49.469 "generate_uuids": false, 00:19:49.469 "transport_tos": 0, 00:19:49.469 "nvme_error_stat": false, 00:19:49.469 "rdma_srq_size": 0, 00:19:49.469 "io_path_stat": false, 00:19:49.469 "allow_accel_sequence": false, 00:19:49.469 "rdma_max_cq_size": 0, 00:19:49.470 "rdma_cm_event_timeout_ms": 0, 00:19:49.470 "dhchap_digests": [ 00:19:49.470 "sha256", 00:19:49.470 "sha384", 00:19:49.470 "sha512" 00:19:49.470 ], 00:19:49.470 "dhchap_dhgroups": [ 00:19:49.470 "null", 00:19:49.470 "ffdhe2048", 00:19:49.470 "ffdhe3072", 00:19:49.470 "ffdhe4096", 00:19:49.470 "ffdhe6144", 00:19:49.470 "ffdhe8192" 00:19:49.470 ] 00:19:49.470 } 00:19:49.470 }, 00:19:49.470 { 00:19:49.470 "method": "bdev_nvme_attach_controller", 00:19:49.470 "params": { 00:19:49.470 "name": "TLSTEST", 00:19:49.470 "trtype": "TCP", 00:19:49.470 "adrfam": "IPv4", 00:19:49.470 "traddr": "10.0.0.2", 00:19:49.470 "trsvcid": "4420", 00:19:49.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.470 "prchk_reftag": false, 00:19:49.470 "prchk_guard": false, 00:19:49.470 "ctrlr_loss_timeout_sec": 0, 00:19:49.470 "reconnect_delay_sec": 0, 00:19:49.470 "fast_io_fail_timeout_sec": 0, 00:19:49.470 "psk": "/tmp/tmp.B8of0O0mlK", 00:19:49.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.470 "hdgst": false, 00:19:49.470 "ddgst": false 00:19:49.470 } 00:19:49.470 }, 00:19:49.470 { 00:19:49.470 "method": "bdev_nvme_set_hotplug", 00:19:49.470 "params": { 00:19:49.470 "period_us": 100000, 00:19:49.470 "enable": false 00:19:49.470 } 00:19:49.470 }, 00:19:49.470 { 00:19:49.470 "method": "bdev_wait_for_examine" 00:19:49.470 } 00:19:49.470 ] 00:19:49.470 }, 00:19:49.470 { 00:19:49.470 "subsystem": "nbd", 00:19:49.470 "config": [] 00:19:49.470 } 00:19:49.470 ] 00:19:49.470 }' 00:19:49.470 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.470 12:54:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.470 [2024-07-15 12:54:20.224929] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:49.470 [2024-07-15 12:54:20.224979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750148 ] 00:19:49.470 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.470 [2024-07-15 12:54:20.292207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.470 [2024-07-15 12:54:20.365590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.729 [2024-07-15 12:54:20.507679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.729 [2024-07-15 12:54:20.507757] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.296 12:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.296 12:54:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.296 12:54:21 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:50.296 Running I/O for 10 seconds... 00:20:00.265 00:20:00.265 Latency(us) 00:20:00.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.265 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:00.265 Verification LBA range: start 0x0 length 0x2000 00:20:00.265 TLSTESTn1 : 10.01 5517.61 21.55 0.00 0.00 23162.31 6240.17 34876.55 00:20:00.265 =================================================================================================================== 00:20:00.265 Total : 5517.61 21.55 0.00 0.00 23162.31 6240.17 34876.55 00:20:00.265 0 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1750148 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1750148 ']' 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1750148 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.265 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1750148 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1750148' 00:20:00.525 killing process with pid 1750148 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1750148 00:20:00.525 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.525 00:20:00.525 Latency(us) 00:20:00.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.525 =================================================================================================================== 00:20:00.525 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.525 [2024-07-15 12:54:31.224867] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1750148 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1749900 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1749900 ']' 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1749900 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749900 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749900' 00:20:00.525 killing process with pid 1749900 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1749900 00:20:00.525 [2024-07-15 12:54:31.452511] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:00.525 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1749900 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1751990 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1751990 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1751990 ']' 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.797 12:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.797 [2024-07-15 12:54:31.699447] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:00.797 [2024-07-15 12:54:31.699493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.797 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.056 [2024-07-15 12:54:31.765925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.056 [2024-07-15 12:54:31.837064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.056 [2024-07-15 12:54:31.837104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.056 [2024-07-15 12:54:31.837110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.056 [2024-07-15 12:54:31.837116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.056 [2024-07-15 12:54:31.837120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.056 [2024-07-15 12:54:31.837136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.B8of0O0mlK 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.B8of0O0mlK 00:20:01.623 12:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.883 [2024-07-15 12:54:32.687128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.883 12:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.142 12:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:02.142 [2024-07-15 12:54:33.044032] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:02.142 [2024-07-15 12:54:33.044214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.142 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:02.402 malloc0 00:20:02.402 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.661 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.B8of0O0mlK 00:20:02.661 [2024-07-15 12:54:33.597652] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1752251 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1752251 /var/tmp/bdevperf.sock 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1752251 ']' 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.921 12:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.921 [2024-07-15 12:54:33.655815] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:02.921 [2024-07-15 12:54:33.655862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752251 ] 00:20:02.921 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.921 [2024-07-15 12:54:33.723862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.921 [2024-07-15 12:54:33.798712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.856 12:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.856 12:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:03.856 12:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8of0O0mlK 00:20:03.856 12:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.115 [2024-07-15 12:54:34.825728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.115 nvme0n1 00:20:04.115 12:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.115 Running I/O for 1 seconds... 
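The throughput columns in these result tables are internally consistent: with the 4096-byte IOs used here, MiB/s is simply IOPS * 4096 / 2^20. For the earlier 10-second TLSTESTn1 run (5517.61 IOPS against 21.55 MiB/s) the check can be done with awk, and the same arithmetic holds for the 1-second table that follows:

    # 5517.61 IOPS at 4 KiB per IO -> prints 21.55 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 5517.61 * 4096 / 1048576 }'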
00:20:05.494 00:20:05.494 Latency(us) 00:20:05.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.494 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.494 Verification LBA range: start 0x0 length 0x2000 00:20:05.494 nvme0n1 : 1.02 5614.45 21.93 0.00 0.00 22608.13 4729.99 26556.33 00:20:05.494 =================================================================================================================== 00:20:05.494 Total : 5614.45 21.93 0.00 0.00 22608.13 4729.99 26556.33 00:20:05.494 0 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1752251 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1752251 ']' 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1752251 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752251 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752251' 00:20:05.494 killing process with pid 1752251 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1752251 00:20:05.494 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.494 00:20:05.494 Latency(us) 00:20:05.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.494 =================================================================================================================== 00:20:05.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1752251 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1751990 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1751990 ']' 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1751990 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751990 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751990' 00:20:05.494 killing process with pid 1751990 00:20:05.494 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1751990 00:20:05.495 [2024-07-15 12:54:36.319041] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:05.495 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1751990 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.754 
12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1752723 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1752723 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1752723 ']' 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.754 12:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.754 [2024-07-15 12:54:36.562656] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:05.754 [2024-07-15 12:54:36.562704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.754 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.754 [2024-07-15 12:54:36.630600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.754 [2024-07-15 12:54:36.700824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.754 [2024-07-15 12:54:36.700865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.754 [2024-07-15 12:54:36.700872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.754 [2024-07-15 12:54:36.700878] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.754 [2024-07-15 12:54:36.700882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
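The app_setup_trace notices above apply to every nvmf_tgt started with -e 0xFFFF in this log: with the full tracepoint mask enabled, events can either be captured live or recovered from shared memory after the fact (commands taken directly from the notices; the copy destination is arbitrary):

    # Live snapshot of events from nvmf target instance 0
    spdk_trace -s nvmf -i 0
    # ...or preserve the shared-memory trace for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0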
00:20:05.754 [2024-07-15 12:54:36.700900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.691 [2024-07-15 12:54:37.423946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.691 malloc0 00:20:06.691 [2024-07-15 12:54:37.452184] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.691 [2024-07-15 12:54:37.452375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1752968 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1752968 /var/tmp/bdevperf.sock 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1752968 ']' 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.691 12:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.691 [2024-07-15 12:54:37.526112] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
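This last case switches from the deprecated raw PSK path, which produced the "nvmf_tcp_psk_path" and "nvme_ctrlr_psk" deprecation warnings earlier in the log, to the keyring flow: the key file is registered once under a name and later RPCs reference it as key0, as the trace below shows. Side by side (rpc.py abbreviating the full scripts path used in the log):

    # Deprecated form: hand the PSK file path straight to the subsystem
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
            nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B8of0O0mlK
    # Keyring form: register the file once, then refer to it by name
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8of0O0mlK
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1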
00:20:06.691 [2024-07-15 12:54:37.526153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752968 ] 00:20:06.691 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.691 [2024-07-15 12:54:37.594881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.950 [2024-07-15 12:54:37.675020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.519 12:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.519 12:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:07.519 12:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.B8of0O0mlK 00:20:07.805 12:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.805 [2024-07-15 12:54:38.662001] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.805 nvme0n1 00:20:07.805 12:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.063 Running I/O for 1 seconds... 00:20:09.000 00:20:09.000 Latency(us) 00:20:09.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.000 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.000 Verification LBA range: start 0x0 length 0x2000 00:20:09.000 nvme0n1 : 1.03 5046.55 19.71 0.00 0.00 25021.05 4673.00 45590.26 00:20:09.000 =================================================================================================================== 00:20:09.000 Total : 5046.55 19.71 0.00 0.00 25021.05 4673.00 45590.26 00:20:09.000 0 00:20:09.000 12:54:39 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:09.000 12:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.000 12:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.258 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.258 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:09.258 "subsystems": [ 00:20:09.258 { 00:20:09.258 "subsystem": "keyring", 00:20:09.258 "config": [ 00:20:09.258 { 00:20:09.258 "method": "keyring_file_add_key", 00:20:09.258 "params": { 00:20:09.258 "name": "key0", 00:20:09.258 "path": "/tmp/tmp.B8of0O0mlK" 00:20:09.258 } 00:20:09.258 } 00:20:09.258 ] 00:20:09.258 }, 00:20:09.258 { 00:20:09.258 "subsystem": "iobuf", 00:20:09.258 "config": [ 00:20:09.258 { 00:20:09.258 "method": "iobuf_set_options", 00:20:09.258 "params": { 00:20:09.258 "small_pool_count": 8192, 00:20:09.258 "large_pool_count": 1024, 00:20:09.258 "small_bufsize": 8192, 00:20:09.258 "large_bufsize": 135168 00:20:09.258 } 00:20:09.258 } 00:20:09.258 ] 00:20:09.258 }, 00:20:09.258 { 00:20:09.258 "subsystem": "sock", 00:20:09.258 "config": [ 00:20:09.258 { 00:20:09.258 "method": "sock_set_default_impl", 00:20:09.258 "params": { 00:20:09.258 "impl_name": "posix" 00:20:09.258 } 
00:20:09.258 }, 00:20:09.258 { 00:20:09.258 "method": "sock_impl_set_options", 00:20:09.258 "params": { 00:20:09.258 "impl_name": "ssl", 00:20:09.258 "recv_buf_size": 4096, 00:20:09.258 "send_buf_size": 4096, 00:20:09.258 "enable_recv_pipe": true, 00:20:09.258 "enable_quickack": false, 00:20:09.258 "enable_placement_id": 0, 00:20:09.258 "enable_zerocopy_send_server": true, 00:20:09.258 "enable_zerocopy_send_client": false, 00:20:09.258 "zerocopy_threshold": 0, 00:20:09.258 "tls_version": 0, 00:20:09.258 "enable_ktls": false 00:20:09.258 } 00:20:09.258 }, 00:20:09.258 { 00:20:09.258 "method": "sock_impl_set_options", 00:20:09.258 "params": { 00:20:09.258 "impl_name": "posix", 00:20:09.258 "recv_buf_size": 2097152, 00:20:09.259 "send_buf_size": 2097152, 00:20:09.259 "enable_recv_pipe": true, 00:20:09.259 "enable_quickack": false, 00:20:09.259 "enable_placement_id": 0, 00:20:09.259 "enable_zerocopy_send_server": true, 00:20:09.259 "enable_zerocopy_send_client": false, 00:20:09.259 "zerocopy_threshold": 0, 00:20:09.259 "tls_version": 0, 00:20:09.259 "enable_ktls": false 00:20:09.259 } 00:20:09.259 } 00:20:09.259 ] 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "subsystem": "vmd", 00:20:09.259 "config": [] 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "subsystem": "accel", 00:20:09.259 "config": [ 00:20:09.259 { 00:20:09.259 "method": "accel_set_options", 00:20:09.259 "params": { 00:20:09.259 "small_cache_size": 128, 00:20:09.259 "large_cache_size": 16, 00:20:09.259 "task_count": 2048, 00:20:09.259 "sequence_count": 2048, 00:20:09.259 "buf_count": 2048 00:20:09.259 } 00:20:09.259 } 00:20:09.259 ] 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "subsystem": "bdev", 00:20:09.259 "config": [ 00:20:09.259 { 00:20:09.259 "method": "bdev_set_options", 00:20:09.259 "params": { 00:20:09.259 "bdev_io_pool_size": 65535, 00:20:09.259 "bdev_io_cache_size": 256, 00:20:09.259 "bdev_auto_examine": true, 00:20:09.259 "iobuf_small_cache_size": 128, 00:20:09.259 "iobuf_large_cache_size": 16 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "bdev_raid_set_options", 00:20:09.259 "params": { 00:20:09.259 "process_window_size_kb": 1024 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "bdev_iscsi_set_options", 00:20:09.259 "params": { 00:20:09.259 "timeout_sec": 30 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "bdev_nvme_set_options", 00:20:09.259 "params": { 00:20:09.259 "action_on_timeout": "none", 00:20:09.259 "timeout_us": 0, 00:20:09.259 "timeout_admin_us": 0, 00:20:09.259 "keep_alive_timeout_ms": 10000, 00:20:09.259 "arbitration_burst": 0, 00:20:09.259 "low_priority_weight": 0, 00:20:09.259 "medium_priority_weight": 0, 00:20:09.259 "high_priority_weight": 0, 00:20:09.259 "nvme_adminq_poll_period_us": 10000, 00:20:09.259 "nvme_ioq_poll_period_us": 0, 00:20:09.259 "io_queue_requests": 0, 00:20:09.259 "delay_cmd_submit": true, 00:20:09.259 "transport_retry_count": 4, 00:20:09.259 "bdev_retry_count": 3, 00:20:09.259 "transport_ack_timeout": 0, 00:20:09.259 "ctrlr_loss_timeout_sec": 0, 00:20:09.259 "reconnect_delay_sec": 0, 00:20:09.259 "fast_io_fail_timeout_sec": 0, 00:20:09.259 "disable_auto_failback": false, 00:20:09.259 "generate_uuids": false, 00:20:09.259 "transport_tos": 0, 00:20:09.259 "nvme_error_stat": false, 00:20:09.259 "rdma_srq_size": 0, 00:20:09.259 "io_path_stat": false, 00:20:09.259 "allow_accel_sequence": false, 00:20:09.259 "rdma_max_cq_size": 0, 00:20:09.259 "rdma_cm_event_timeout_ms": 0, 00:20:09.259 "dhchap_digests": [ 00:20:09.259 "sha256", 
00:20:09.259 "sha384", 00:20:09.259 "sha512" 00:20:09.259 ], 00:20:09.259 "dhchap_dhgroups": [ 00:20:09.259 "null", 00:20:09.259 "ffdhe2048", 00:20:09.259 "ffdhe3072", 00:20:09.259 "ffdhe4096", 00:20:09.259 "ffdhe6144", 00:20:09.259 "ffdhe8192" 00:20:09.259 ] 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "bdev_nvme_set_hotplug", 00:20:09.259 "params": { 00:20:09.259 "period_us": 100000, 00:20:09.259 "enable": false 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "bdev_malloc_create", 00:20:09.259 "params": { 00:20:09.259 "name": "malloc0", 00:20:09.259 "num_blocks": 8192, 00:20:09.259 "block_size": 4096, 00:20:09.259 "physical_block_size": 4096, 00:20:09.259 "uuid": "218ab0ac-7ca5-4d7a-b8d3-33c29ef569f8", 00:20:09.259 "optimal_io_boundary": 0 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "bdev_wait_for_examine" 00:20:09.259 } 00:20:09.259 ] 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "subsystem": "nbd", 00:20:09.259 "config": [] 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "subsystem": "scheduler", 00:20:09.259 "config": [ 00:20:09.259 { 00:20:09.259 "method": "framework_set_scheduler", 00:20:09.259 "params": { 00:20:09.259 "name": "static" 00:20:09.259 } 00:20:09.259 } 00:20:09.259 ] 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "subsystem": "nvmf", 00:20:09.259 "config": [ 00:20:09.259 { 00:20:09.259 "method": "nvmf_set_config", 00:20:09.259 "params": { 00:20:09.259 "discovery_filter": "match_any", 00:20:09.259 "admin_cmd_passthru": { 00:20:09.259 "identify_ctrlr": false 00:20:09.259 } 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_set_max_subsystems", 00:20:09.259 "params": { 00:20:09.259 "max_subsystems": 1024 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_set_crdt", 00:20:09.259 "params": { 00:20:09.259 "crdt1": 0, 00:20:09.259 "crdt2": 0, 00:20:09.259 "crdt3": 0 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_create_transport", 00:20:09.259 "params": { 00:20:09.259 "trtype": "TCP", 00:20:09.259 "max_queue_depth": 128, 00:20:09.259 "max_io_qpairs_per_ctrlr": 127, 00:20:09.259 "in_capsule_data_size": 4096, 00:20:09.259 "max_io_size": 131072, 00:20:09.259 "io_unit_size": 131072, 00:20:09.259 "max_aq_depth": 128, 00:20:09.259 "num_shared_buffers": 511, 00:20:09.259 "buf_cache_size": 4294967295, 00:20:09.259 "dif_insert_or_strip": false, 00:20:09.259 "zcopy": false, 00:20:09.259 "c2h_success": false, 00:20:09.259 "sock_priority": 0, 00:20:09.259 "abort_timeout_sec": 1, 00:20:09.259 "ack_timeout": 0, 00:20:09.259 "data_wr_pool_size": 0 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_create_subsystem", 00:20:09.259 "params": { 00:20:09.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.259 "allow_any_host": false, 00:20:09.259 "serial_number": "00000000000000000000", 00:20:09.259 "model_number": "SPDK bdev Controller", 00:20:09.259 "max_namespaces": 32, 00:20:09.259 "min_cntlid": 1, 00:20:09.259 "max_cntlid": 65519, 00:20:09.259 "ana_reporting": false 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_subsystem_add_host", 00:20:09.259 "params": { 00:20:09.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.259 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.259 "psk": "key0" 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_subsystem_add_ns", 00:20:09.259 "params": { 00:20:09.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.259 "namespace": { 00:20:09.259 "nsid": 1, 
00:20:09.259 "bdev_name": "malloc0", 00:20:09.259 "nguid": "218AB0AC7CA54D7AB8D333C29EF569F8", 00:20:09.259 "uuid": "218ab0ac-7ca5-4d7a-b8d3-33c29ef569f8", 00:20:09.259 "no_auto_visible": false 00:20:09.259 } 00:20:09.259 } 00:20:09.259 }, 00:20:09.259 { 00:20:09.259 "method": "nvmf_subsystem_add_listener", 00:20:09.259 "params": { 00:20:09.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.259 "listen_address": { 00:20:09.259 "trtype": "TCP", 00:20:09.259 "adrfam": "IPv4", 00:20:09.259 "traddr": "10.0.0.2", 00:20:09.259 "trsvcid": "4420" 00:20:09.259 }, 00:20:09.259 "secure_channel": true 00:20:09.259 } 00:20:09.259 } 00:20:09.259 ] 00:20:09.259 } 00:20:09.259 ] 00:20:09.259 }' 00:20:09.259 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:09.519 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:09.519 "subsystems": [ 00:20:09.519 { 00:20:09.519 "subsystem": "keyring", 00:20:09.519 "config": [ 00:20:09.519 { 00:20:09.519 "method": "keyring_file_add_key", 00:20:09.519 "params": { 00:20:09.519 "name": "key0", 00:20:09.519 "path": "/tmp/tmp.B8of0O0mlK" 00:20:09.519 } 00:20:09.519 } 00:20:09.519 ] 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "subsystem": "iobuf", 00:20:09.519 "config": [ 00:20:09.519 { 00:20:09.519 "method": "iobuf_set_options", 00:20:09.519 "params": { 00:20:09.519 "small_pool_count": 8192, 00:20:09.519 "large_pool_count": 1024, 00:20:09.519 "small_bufsize": 8192, 00:20:09.519 "large_bufsize": 135168 00:20:09.519 } 00:20:09.519 } 00:20:09.519 ] 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "subsystem": "sock", 00:20:09.519 "config": [ 00:20:09.519 { 00:20:09.519 "method": "sock_set_default_impl", 00:20:09.519 "params": { 00:20:09.519 "impl_name": "posix" 00:20:09.519 } 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "method": "sock_impl_set_options", 00:20:09.519 "params": { 00:20:09.519 "impl_name": "ssl", 00:20:09.519 "recv_buf_size": 4096, 00:20:09.519 "send_buf_size": 4096, 00:20:09.519 "enable_recv_pipe": true, 00:20:09.519 "enable_quickack": false, 00:20:09.519 "enable_placement_id": 0, 00:20:09.519 "enable_zerocopy_send_server": true, 00:20:09.519 "enable_zerocopy_send_client": false, 00:20:09.519 "zerocopy_threshold": 0, 00:20:09.519 "tls_version": 0, 00:20:09.519 "enable_ktls": false 00:20:09.519 } 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "method": "sock_impl_set_options", 00:20:09.519 "params": { 00:20:09.519 "impl_name": "posix", 00:20:09.519 "recv_buf_size": 2097152, 00:20:09.519 "send_buf_size": 2097152, 00:20:09.519 "enable_recv_pipe": true, 00:20:09.519 "enable_quickack": false, 00:20:09.519 "enable_placement_id": 0, 00:20:09.519 "enable_zerocopy_send_server": true, 00:20:09.519 "enable_zerocopy_send_client": false, 00:20:09.519 "zerocopy_threshold": 0, 00:20:09.519 "tls_version": 0, 00:20:09.519 "enable_ktls": false 00:20:09.519 } 00:20:09.519 } 00:20:09.519 ] 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "subsystem": "vmd", 00:20:09.519 "config": [] 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "subsystem": "accel", 00:20:09.519 "config": [ 00:20:09.519 { 00:20:09.519 "method": "accel_set_options", 00:20:09.519 "params": { 00:20:09.519 "small_cache_size": 128, 00:20:09.519 "large_cache_size": 16, 00:20:09.519 "task_count": 2048, 00:20:09.519 "sequence_count": 2048, 00:20:09.519 "buf_count": 2048 00:20:09.519 } 00:20:09.519 } 00:20:09.519 ] 00:20:09.519 }, 00:20:09.519 { 00:20:09.519 "subsystem": "bdev", 00:20:09.519 "config": [ 
00:20:09.519 { 00:20:09.519 "method": "bdev_set_options", 00:20:09.519 "params": { 00:20:09.519 "bdev_io_pool_size": 65535, 00:20:09.519 "bdev_io_cache_size": 256, 00:20:09.519 "bdev_auto_examine": true, 00:20:09.519 "iobuf_small_cache_size": 128, 00:20:09.519 "iobuf_large_cache_size": 16 00:20:09.519 } 00:20:09.519 }, 00:20:09.519 { 00:20:09.520 "method": "bdev_raid_set_options", 00:20:09.520 "params": { 00:20:09.520 "process_window_size_kb": 1024 00:20:09.520 } 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "method": "bdev_iscsi_set_options", 00:20:09.520 "params": { 00:20:09.520 "timeout_sec": 30 00:20:09.520 } 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "method": "bdev_nvme_set_options", 00:20:09.520 "params": { 00:20:09.520 "action_on_timeout": "none", 00:20:09.520 "timeout_us": 0, 00:20:09.520 "timeout_admin_us": 0, 00:20:09.520 "keep_alive_timeout_ms": 10000, 00:20:09.520 "arbitration_burst": 0, 00:20:09.520 "low_priority_weight": 0, 00:20:09.520 "medium_priority_weight": 0, 00:20:09.520 "high_priority_weight": 0, 00:20:09.520 "nvme_adminq_poll_period_us": 10000, 00:20:09.520 "nvme_ioq_poll_period_us": 0, 00:20:09.520 "io_queue_requests": 512, 00:20:09.520 "delay_cmd_submit": true, 00:20:09.520 "transport_retry_count": 4, 00:20:09.520 "bdev_retry_count": 3, 00:20:09.520 "transport_ack_timeout": 0, 00:20:09.520 "ctrlr_loss_timeout_sec": 0, 00:20:09.520 "reconnect_delay_sec": 0, 00:20:09.520 "fast_io_fail_timeout_sec": 0, 00:20:09.520 "disable_auto_failback": false, 00:20:09.520 "generate_uuids": false, 00:20:09.520 "transport_tos": 0, 00:20:09.520 "nvme_error_stat": false, 00:20:09.520 "rdma_srq_size": 0, 00:20:09.520 "io_path_stat": false, 00:20:09.520 "allow_accel_sequence": false, 00:20:09.520 "rdma_max_cq_size": 0, 00:20:09.520 "rdma_cm_event_timeout_ms": 0, 00:20:09.520 "dhchap_digests": [ 00:20:09.520 "sha256", 00:20:09.520 "sha384", 00:20:09.520 "sha512" 00:20:09.520 ], 00:20:09.520 "dhchap_dhgroups": [ 00:20:09.520 "null", 00:20:09.520 "ffdhe2048", 00:20:09.520 "ffdhe3072", 00:20:09.520 "ffdhe4096", 00:20:09.520 "ffdhe6144", 00:20:09.520 "ffdhe8192" 00:20:09.520 ] 00:20:09.520 } 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "method": "bdev_nvme_attach_controller", 00:20:09.520 "params": { 00:20:09.520 "name": "nvme0", 00:20:09.520 "trtype": "TCP", 00:20:09.520 "adrfam": "IPv4", 00:20:09.520 "traddr": "10.0.0.2", 00:20:09.520 "trsvcid": "4420", 00:20:09.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.520 "prchk_reftag": false, 00:20:09.520 "prchk_guard": false, 00:20:09.520 "ctrlr_loss_timeout_sec": 0, 00:20:09.520 "reconnect_delay_sec": 0, 00:20:09.520 "fast_io_fail_timeout_sec": 0, 00:20:09.520 "psk": "key0", 00:20:09.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.520 "hdgst": false, 00:20:09.520 "ddgst": false 00:20:09.520 } 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "method": "bdev_nvme_set_hotplug", 00:20:09.520 "params": { 00:20:09.520 "period_us": 100000, 00:20:09.520 "enable": false 00:20:09.520 } 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "method": "bdev_enable_histogram", 00:20:09.520 "params": { 00:20:09.520 "name": "nvme0n1", 00:20:09.520 "enable": true 00:20:09.520 } 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "method": "bdev_wait_for_examine" 00:20:09.520 } 00:20:09.520 ] 00:20:09.520 }, 00:20:09.520 { 00:20:09.520 "subsystem": "nbd", 00:20:09.520 "config": [] 00:20:09.520 } 00:20:09.520 ] 00:20:09.520 }' 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1752968 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1752968 ']' 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1752968 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752968 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752968' 00:20:09.520 killing process with pid 1752968 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1752968 00:20:09.520 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.520 00:20:09.520 Latency(us) 00:20:09.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.520 =================================================================================================================== 00:20:09.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.520 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1752968 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1752723 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1752723 ']' 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1752723 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752723 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752723' 00:20:09.780 killing process with pid 1752723 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1752723 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1752723 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.780 12:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:09.780 "subsystems": [ 00:20:09.780 { 00:20:09.780 "subsystem": "keyring", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "keyring_file_add_key", 00:20:09.780 "params": { 00:20:09.780 "name": "key0", 00:20:09.780 "path": "/tmp/tmp.B8of0O0mlK" 00:20:09.780 } 00:20:09.780 } 00:20:09.780 ] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "iobuf", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "iobuf_set_options", 00:20:09.780 "params": { 00:20:09.780 "small_pool_count": 8192, 00:20:09.780 "large_pool_count": 1024, 00:20:09.780 "small_bufsize": 8192, 00:20:09.780 "large_bufsize": 135168 00:20:09.780 } 00:20:09.780 } 00:20:09.780 ] 00:20:09.780 }, 
00:20:09.780 { 00:20:09.780 "subsystem": "sock", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "sock_set_default_impl", 00:20:09.780 "params": { 00:20:09.780 "impl_name": "posix" 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "sock_impl_set_options", 00:20:09.780 "params": { 00:20:09.780 "impl_name": "ssl", 00:20:09.780 "recv_buf_size": 4096, 00:20:09.780 "send_buf_size": 4096, 00:20:09.780 "enable_recv_pipe": true, 00:20:09.780 "enable_quickack": false, 00:20:09.780 "enable_placement_id": 0, 00:20:09.780 "enable_zerocopy_send_server": true, 00:20:09.780 "enable_zerocopy_send_client": false, 00:20:09.780 "zerocopy_threshold": 0, 00:20:09.780 "tls_version": 0, 00:20:09.780 "enable_ktls": false 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "sock_impl_set_options", 00:20:09.780 "params": { 00:20:09.780 "impl_name": "posix", 00:20:09.780 "recv_buf_size": 2097152, 00:20:09.780 "send_buf_size": 2097152, 00:20:09.780 "enable_recv_pipe": true, 00:20:09.780 "enable_quickack": false, 00:20:09.780 "enable_placement_id": 0, 00:20:09.780 "enable_zerocopy_send_server": true, 00:20:09.780 "enable_zerocopy_send_client": false, 00:20:09.780 "zerocopy_threshold": 0, 00:20:09.780 "tls_version": 0, 00:20:09.780 "enable_ktls": false 00:20:09.780 } 00:20:09.780 } 00:20:09.780 ] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "vmd", 00:20:09.780 "config": [] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "accel", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "accel_set_options", 00:20:09.780 "params": { 00:20:09.780 "small_cache_size": 128, 00:20:09.780 "large_cache_size": 16, 00:20:09.780 "task_count": 2048, 00:20:09.780 "sequence_count": 2048, 00:20:09.780 "buf_count": 2048 00:20:09.780 } 00:20:09.780 } 00:20:09.780 ] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "bdev", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "bdev_set_options", 00:20:09.780 "params": { 00:20:09.780 "bdev_io_pool_size": 65535, 00:20:09.780 "bdev_io_cache_size": 256, 00:20:09.780 "bdev_auto_examine": true, 00:20:09.780 "iobuf_small_cache_size": 128, 00:20:09.780 "iobuf_large_cache_size": 16 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "bdev_raid_set_options", 00:20:09.780 "params": { 00:20:09.780 "process_window_size_kb": 1024 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "bdev_iscsi_set_options", 00:20:09.780 "params": { 00:20:09.780 "timeout_sec": 30 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "bdev_nvme_set_options", 00:20:09.780 "params": { 00:20:09.780 "action_on_timeout": "none", 00:20:09.780 "timeout_us": 0, 00:20:09.780 "timeout_admin_us": 0, 00:20:09.780 "keep_alive_timeout_ms": 10000, 00:20:09.780 "arbitration_burst": 0, 00:20:09.780 "low_priority_weight": 0, 00:20:09.780 "medium_priority_weight": 0, 00:20:09.780 "high_priority_weight": 0, 00:20:09.780 "nvme_adminq_poll_period_us": 10000, 00:20:09.780 "nvme_ioq_poll_period_us": 0, 00:20:09.780 "io_queue_requests": 0, 00:20:09.780 "delay_cmd_submit": true, 00:20:09.780 "transport_retry_count": 4, 00:20:09.780 "bdev_retry_count": 3, 00:20:09.780 "transport_ack_timeout": 0, 00:20:09.780 "ctrlr_loss_timeout_sec": 0, 00:20:09.780 "reconnect_delay_sec": 0, 00:20:09.780 "fast_io_fail_timeout_sec": 0, 00:20:09.780 "disable_auto_failback": false, 00:20:09.780 "generate_uuids": false, 00:20:09.780 "transport_tos": 0, 00:20:09.780 "nvme_error_stat": false, 00:20:09.780 "rdma_srq_size": 0, 
00:20:09.780 "io_path_stat": false, 00:20:09.780 "allow_accel_sequence": false, 00:20:09.780 "rdma_max_cq_size": 0, 00:20:09.780 "rdma_cm_event_timeout_ms": 0, 00:20:09.780 "dhchap_digests": [ 00:20:09.780 "sha256", 00:20:09.780 "sha384", 00:20:09.780 "sha512" 00:20:09.780 ], 00:20:09.780 "dhchap_dhgroups": [ 00:20:09.780 "null", 00:20:09.780 "ffdhe2048", 00:20:09.780 "ffdhe3072", 00:20:09.780 "ffdhe4096", 00:20:09.780 "ffdhe6144", 00:20:09.780 "ffdhe8192" 00:20:09.780 ] 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "bdev_nvme_set_hotplug", 00:20:09.780 "params": { 00:20:09.780 "period_us": 100000, 00:20:09.780 "enable": false 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "bdev_malloc_create", 00:20:09.780 "params": { 00:20:09.780 "name": "malloc0", 00:20:09.780 "num_blocks": 8192, 00:20:09.780 "block_size": 4096, 00:20:09.780 "physical_block_size": 4096, 00:20:09.780 "uuid": "218ab0ac-7ca5-4d7a-b8d3-33c29ef569f8", 00:20:09.780 "optimal_io_boundary": 0 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "bdev_wait_for_examine" 00:20:09.780 } 00:20:09.780 ] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "nbd", 00:20:09.780 "config": [] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "scheduler", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "framework_set_scheduler", 00:20:09.780 "params": { 00:20:09.780 "name": "static" 00:20:09.780 } 00:20:09.780 } 00:20:09.780 ] 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "subsystem": "nvmf", 00:20:09.780 "config": [ 00:20:09.780 { 00:20:09.780 "method": "nvmf_set_config", 00:20:09.780 "params": { 00:20:09.780 "discovery_filter": "match_any", 00:20:09.780 "admin_cmd_passthru": { 00:20:09.780 "identify_ctrlr": false 00:20:09.780 } 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "nvmf_set_max_subsystems", 00:20:09.780 "params": { 00:20:09.780 "max_subsystems": 1024 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "nvmf_set_crdt", 00:20:09.780 "params": { 00:20:09.780 "crdt1": 0, 00:20:09.780 "crdt2": 0, 00:20:09.780 "crdt3": 0 00:20:09.780 } 00:20:09.780 }, 00:20:09.780 { 00:20:09.780 "method": "nvmf_create_transport", 00:20:09.780 "params": { 00:20:09.780 "trtype": "TCP", 00:20:09.780 "max_queue_depth": 128, 00:20:09.780 "max_io_qpairs_per_ctrlr": 127, 00:20:09.780 "in_capsule_data_size": 4096, 00:20:09.781 "max_io_size": 131072, 00:20:09.781 "io_unit_size": 131072, 00:20:09.781 "max_aq_depth": 128, 00:20:09.781 "num_shared_buffers": 511, 00:20:09.781 "buf_cache_size": 4294967295, 00:20:09.781 "dif_insert_or_strip": false, 00:20:09.781 "zcopy": false, 00:20:09.781 "c2h_success": false, 00:20:09.781 "sock_priority": 0, 00:20:09.781 "abort_timeout_sec": 1, 00:20:09.781 "ack_timeout": 0, 00:20:09.781 "data_wr_pool_size": 0 00:20:09.781 } 00:20:09.781 }, 00:20:09.781 { 00:20:09.781 "method": "nvmf_create_subsystem", 00:20:09.781 "params": { 00:20:09.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.781 "allow_any_host": false, 00:20:09.781 "serial_number": "00000000000000000000", 00:20:09.781 "model_number": "SPDK bdev Controller", 00:20:09.781 "max_namespaces": 32, 00:20:09.781 "min_cntlid": 1, 00:20:09.781 "max_cntlid": 65519, 00:20:09.781 "ana_reporting": false 00:20:09.781 } 00:20:09.781 }, 00:20:09.781 { 00:20:09.781 "method": "nvmf_subsystem_add_host", 00:20:09.781 "params": { 00:20:09.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.781 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.781 "psk": "key0" 00:20:09.781 } 
00:20:09.781 }, 00:20:09.781 { 00:20:09.781 "method": "nvmf_subsystem_add_ns", 00:20:09.781 "params": { 00:20:09.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.781 "namespace": { 00:20:09.781 "nsid": 1, 00:20:09.781 "bdev_name": "malloc0", 00:20:09.781 "nguid": "218AB0AC7CA54D7AB8D333C29EF569F8", 00:20:09.781 "uuid": "218ab0ac-7ca5-4d7a-b8d3-33c29ef569f8", 00:20:09.781 "no_auto_visible": false 00:20:09.781 } 00:20:09.781 } 00:20:09.781 }, 00:20:09.781 { 00:20:09.781 "method": "nvmf_subsystem_add_listener", 00:20:09.781 "params": { 00:20:09.781 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.781 "listen_address": { 00:20:09.781 "trtype": "TCP", 00:20:09.781 "adrfam": "IPv4", 00:20:09.781 "traddr": "10.0.0.2", 00:20:09.781 "trsvcid": "4420" 00:20:09.781 }, 00:20:09.781 "secure_channel": true 00:20:09.781 } 00:20:09.781 } 00:20:09.781 ] 00:20:09.781 } 00:20:09.781 ] 00:20:09.781 }' 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1753451 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1753451 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1753451 ']' 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.781 12:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.040 [2024-07-15 12:54:40.778401] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:10.040 [2024-07-15 12:54:40.778448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.040 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.040 [2024-07-15 12:54:40.843217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.040 [2024-07-15 12:54:40.912211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.040 [2024-07-15 12:54:40.912259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.040 [2024-07-15 12:54:40.912266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.040 [2024-07-15 12:54:40.912272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.040 [2024-07-15 12:54:40.912276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
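
Editor's note: the lines above show the target being launched as nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 inside the cvl_0_0_ns_spdk namespace, i.e. the large JSON dump is streamed into the app through a file descriptor rather than written to disk, after which the harness waits for the RPC socket. A minimal sketch of that launch-and-wait pattern, assuming SPDK is built under ./spdk and using an abbreviated config; the socket-polling loop is an illustrative stand-in for the harness's waitforlisten helper:

    # Stream a JSON config into nvmf_tgt via process substitution; the app
    # reads /dev/fd/NN as if it were an ordinary config file, so no config
    # file ever lands on disk.
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo '{ "subsystems": [] }') &
    nvmfpid=$!

    # Wait for the app's default RPC socket before issuing further RPCs
    # (stand-in for waitforlisten).
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
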
00:20:10.040 [2024-07-15 12:54:40.912331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.300 [2024-07-15 12:54:41.124183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.300 [2024-07-15 12:54:41.156199] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.300 [2024-07-15 12:54:41.171364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.868 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.868 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1753696 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1753696 /var/tmp/bdevperf.sock 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1753696 ']' 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
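
Editor's note: the initiator side is bdevperf, started idle: -m 2 pins it to core 1, -z makes it wait for an RPC instead of running immediately, -r points it at a private RPC socket, and -q 128 / -o 4k / -w verify / -t 1 describe the eventual workload (queue depth 128, 4 KiB I/Os, verify pattern, one second). The config echoed below performs the TLS attach at startup; roughly the same attach can be issued at runtime instead, as in this sketch (the --psk argument here is a keyring name, matching the JSON, but the flag's meaning has varied across SPDK releases, so treat the exact spelling as an assumption):

    # Start bdevperf idle; the verify job only runs once perform_tests is called.
    ./spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &

    # Load the PSK into the keyring, then attach the TLS-secured controller.
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key \
        key0 /tmp/tmp.B8of0O0mlK
    ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
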
00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:10.869 "subsystems": [ 00:20:10.869 { 00:20:10.869 "subsystem": "keyring", 00:20:10.869 "config": [ 00:20:10.869 { 00:20:10.869 "method": "keyring_file_add_key", 00:20:10.869 "params": { 00:20:10.869 "name": "key0", 00:20:10.869 "path": "/tmp/tmp.B8of0O0mlK" 00:20:10.869 } 00:20:10.869 } 00:20:10.869 ] 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "subsystem": "iobuf", 00:20:10.869 "config": [ 00:20:10.869 { 00:20:10.869 "method": "iobuf_set_options", 00:20:10.869 "params": { 00:20:10.869 "small_pool_count": 8192, 00:20:10.869 "large_pool_count": 1024, 00:20:10.869 "small_bufsize": 8192, 00:20:10.869 "large_bufsize": 135168 00:20:10.869 } 00:20:10.869 } 00:20:10.869 ] 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "subsystem": "sock", 00:20:10.869 "config": [ 00:20:10.869 { 00:20:10.869 "method": "sock_set_default_impl", 00:20:10.869 "params": { 00:20:10.869 "impl_name": "posix" 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "sock_impl_set_options", 00:20:10.869 "params": { 00:20:10.869 "impl_name": "ssl", 00:20:10.869 "recv_buf_size": 4096, 00:20:10.869 "send_buf_size": 4096, 00:20:10.869 "enable_recv_pipe": true, 00:20:10.869 "enable_quickack": false, 00:20:10.869 "enable_placement_id": 0, 00:20:10.869 "enable_zerocopy_send_server": true, 00:20:10.869 "enable_zerocopy_send_client": false, 00:20:10.869 "zerocopy_threshold": 0, 00:20:10.869 "tls_version": 0, 00:20:10.869 "enable_ktls": false 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "sock_impl_set_options", 00:20:10.869 "params": { 00:20:10.869 "impl_name": "posix", 00:20:10.869 "recv_buf_size": 2097152, 00:20:10.869 "send_buf_size": 2097152, 00:20:10.869 "enable_recv_pipe": true, 00:20:10.869 "enable_quickack": false, 00:20:10.869 "enable_placement_id": 0, 00:20:10.869 "enable_zerocopy_send_server": true, 00:20:10.869 "enable_zerocopy_send_client": false, 00:20:10.869 "zerocopy_threshold": 0, 00:20:10.869 "tls_version": 0, 00:20:10.869 "enable_ktls": false 00:20:10.869 } 00:20:10.869 } 00:20:10.869 ] 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "subsystem": "vmd", 00:20:10.869 "config": [] 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "subsystem": "accel", 00:20:10.869 "config": [ 00:20:10.869 { 00:20:10.869 "method": "accel_set_options", 00:20:10.869 "params": { 00:20:10.869 "small_cache_size": 128, 00:20:10.869 "large_cache_size": 16, 00:20:10.869 "task_count": 2048, 00:20:10.869 "sequence_count": 2048, 00:20:10.869 "buf_count": 2048 00:20:10.869 } 00:20:10.869 } 00:20:10.869 ] 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "subsystem": "bdev", 00:20:10.869 "config": [ 00:20:10.869 { 00:20:10.869 "method": "bdev_set_options", 00:20:10.869 "params": { 00:20:10.869 "bdev_io_pool_size": 65535, 00:20:10.869 "bdev_io_cache_size": 256, 00:20:10.869 "bdev_auto_examine": true, 00:20:10.869 "iobuf_small_cache_size": 128, 00:20:10.869 "iobuf_large_cache_size": 16 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_raid_set_options", 00:20:10.869 "params": { 00:20:10.869 "process_window_size_kb": 1024 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_iscsi_set_options", 00:20:10.869 "params": { 00:20:10.869 "timeout_sec": 30 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_nvme_set_options", 00:20:10.869 "params": { 00:20:10.869 "action_on_timeout": "none", 00:20:10.869 "timeout_us": 0, 00:20:10.869 "timeout_admin_us": 0, 00:20:10.869 "keep_alive_timeout_ms": 
10000, 00:20:10.869 "arbitration_burst": 0, 00:20:10.869 "low_priority_weight": 0, 00:20:10.869 "medium_priority_weight": 0, 00:20:10.869 "high_priority_weight": 0, 00:20:10.869 "nvme_adminq_poll_period_us": 10000, 00:20:10.869 "nvme_ioq_poll_period_us": 0, 00:20:10.869 "io_queue_requests": 512, 00:20:10.869 "delay_cmd_submit": true, 00:20:10.869 "transport_retry_count": 4, 00:20:10.869 "bdev_retry_count": 3, 00:20:10.869 "transport_ack_timeout": 0, 00:20:10.869 "ctrlr_loss_timeout_sec": 0, 00:20:10.869 "reconnect_delay_sec": 0, 00:20:10.869 "fast_io_fail_timeout_sec": 0, 00:20:10.869 "disable_auto_failback": false, 00:20:10.869 "generate_uuids": false, 00:20:10.869 "transport_tos": 0, 00:20:10.869 "nvme_error_stat": false, 00:20:10.869 "rdma_srq_size": 0, 00:20:10.869 "io_path_stat": false, 00:20:10.869 "allow_accel_sequence": false, 00:20:10.869 "rdma_max_cq_size": 0, 00:20:10.869 "rdma_cm_event_timeout_ms": 0, 00:20:10.869 "dhchap_digests": [ 00:20:10.869 "sha256", 00:20:10.869 "sha384", 00:20:10.869 "sha512" 00:20:10.869 ], 00:20:10.869 "dhchap_dhgroups": [ 00:20:10.869 "null", 00:20:10.869 "ffdhe2048", 00:20:10.869 "ffdhe3072", 00:20:10.869 "ffdhe4096", 00:20:10.869 "ffdhe6144", 00:20:10.869 "ffdhe8192" 00:20:10.869 ] 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_nvme_attach_controller", 00:20:10.869 "params": { 00:20:10.869 "name": "nvme0", 00:20:10.869 "trtype": "TCP", 00:20:10.869 "adrfam": "IPv4", 00:20:10.869 "traddr": "10.0.0.2", 00:20:10.869 "trsvcid": "4420", 00:20:10.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.869 "prchk_reftag": false, 00:20:10.869 "prchk_guard": false, 00:20:10.869 "ctrlr_loss_timeout_sec": 0, 00:20:10.869 "reconnect_delay_sec": 0, 00:20:10.869 "fast_io_fail_timeout_sec": 0, 00:20:10.869 "psk": "key0", 00:20:10.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.869 "hdgst": false, 00:20:10.869 "ddgst": false 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_nvme_set_hotplug", 00:20:10.869 "params": { 00:20:10.869 "period_us": 100000, 00:20:10.869 "enable": false 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_enable_histogram", 00:20:10.869 "params": { 00:20:10.869 "name": "nvme0n1", 00:20:10.869 "enable": true 00:20:10.869 } 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "method": "bdev_wait_for_examine" 00:20:10.869 } 00:20:10.869 ] 00:20:10.869 }, 00:20:10.869 { 00:20:10.869 "subsystem": "nbd", 00:20:10.869 "config": [] 00:20:10.869 } 00:20:10.869 ] 00:20:10.869 }' 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.869 12:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.869 [2024-07-15 12:54:41.661471] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
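
Editor's note: once bdevperf is up (its startup banner continues below), the test verifies that the TLS attach actually produced a controller and only then releases the workload. In isolation, the two RPC calls visible further down look like this:

    # An empty controller list means the PSK handshake failed, so this
    # check doubles as a TLS success test.
    name=$(./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
           | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1

    # Kick the idle (-z) bdevperf into running its configured 1-second verify job.
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
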
00:20:10.869 [2024-07-15 12:54:41.661523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753696 ] 00:20:10.869 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.869 [2024-07-15 12:54:41.727915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.869 [2024-07-15 12:54:41.801689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.129 [2024-07-15 12:54:41.952875] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.697 12:54:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.697 12:54:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:11.697 12:54:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.697 12:54:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:11.956 12:54:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.956 12:54:42 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.956 Running I/O for 1 seconds... 00:20:12.890 00:20:12.890 Latency(us) 00:20:12.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.890 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.890 Verification LBA range: start 0x0 length 0x2000 00:20:12.890 nvme0n1 : 1.02 5453.76 21.30 0.00 0.00 23261.78 4758.48 28151.99 00:20:12.890 =================================================================================================================== 00:20:12.890 Total : 5453.76 21.30 0.00 0.00 23261.78 4758.48 28151.99 00:20:12.890 0 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:12.890 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:12.890 nvmf_trace.0 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1753696 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1753696 ']' 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1753696 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1753696 00:20:13.148 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:13.149 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:13.149 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1753696' 00:20:13.149 killing process with pid 1753696 00:20:13.149 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1753696 00:20:13.149 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.149 00:20:13.149 Latency(us) 00:20:13.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.149 =================================================================================================================== 00:20:13.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.149 12:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1753696 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.407 rmmod nvme_tcp 00:20:13.407 rmmod nvme_fabrics 00:20:13.407 rmmod nvme_keyring 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:13.407 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1753451 ']' 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1753451 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1753451 ']' 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1753451 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1753451 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1753451' 00:20:13.408 killing process with pid 1753451 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1753451 00:20:13.408 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1753451 00:20:13.666 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.667 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.667 12:54:44 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.667 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.667 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.667 12:54:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.667 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.667 12:54:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.572 12:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.572 12:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xiaTmRTwvP /tmp/tmp.1LQ87R31W7 /tmp/tmp.B8of0O0mlK 00:20:15.572 00:20:15.572 real 1m25.622s 00:20:15.572 user 2m12.685s 00:20:15.572 sys 0m28.870s 00:20:15.572 12:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:15.572 12:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.572 ************************************ 00:20:15.572 END TEST nvmf_tls 00:20:15.572 ************************************ 00:20:15.572 12:54:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:15.572 12:54:46 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.572 12:54:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:15.572 12:54:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.572 12:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.832 ************************************ 00:20:15.832 START TEST nvmf_fips 00:20:15.832 ************************************ 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.832 * Looking for test storage... 
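
Editor's note: the nvmf_tls teardown just above follows a fixed order so nothing leaks into the next suite: stop the initiator and target apps, unload the kernel NVMe/TCP modules, flush the test addresses, and delete the PSK material. Condensed into a sketch (the PIDs stand in for the killprocess calls above, and the exact ordering in the harness differs slightly):

    kill "$bdevperf_pid" "$nvmfpid" 2>/dev/null   # stop initiator and target apps
    modprobe -v -r nvme-tcp                       # also pulls out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1                      # drop the initiator-side test address
    rm -f /tmp/tmp.xiaTmRTwvP /tmp/tmp.1LQ87R31W7 /tmp/tmp.B8of0O0mlK
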
00:20:15.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.832 12:54:46 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:15.832 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:15.833 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:16.092 Error setting digest 00:20:16.092 00F24CB6207F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:16.092 00F24CB6207F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.092 12:54:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.365 
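
Editor's note on the fips.sh steps above: after confirming OpenSSL >= 3.0.0, locating /usr/lib64/ossl-modules/fips.so, generating spdk_fips.conf, and listing the base and fips providers, the actual sanity check is a negative test: MD5 is not a FIPS-approved digest, so `openssl md5` must fail once the config is in force. The "Error setting digest ... unsupported" lines above are the expected outcome, inverted by the harness's NOT helper. Reduced to a standalone sketch:

    # With the fips provider active, a non-approved digest must be rejected.
    export OPENSSL_CONF=spdk_fips.conf
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly worked; FIPS providers are not active" >&2
        exit 1
    fi
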
12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:21.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:21.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.365 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:21.366 Found net devices under 0000:86:00.0: cvl_0_0 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:21.366 Found net devices under 0000:86:00.1: cvl_0_1 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.366 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:21.625 00:20:21.625 --- 10.0.0.2 ping statistics --- 00:20:21.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.625 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:21.625 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:20:21.626 00:20:21.626 --- 10.0.0.1 ping statistics --- 00:20:21.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.626 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.626 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1757685 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1757685 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1757685 ']' 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.886 12:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.886 [2024-07-15 12:54:52.646805] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:21.886 [2024-07-15 12:54:52.646851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.886 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.886 [2024-07-15 12:54:52.714660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.886 [2024-07-15 12:54:52.791314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.886 [2024-07-15 12:54:52.791348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
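Annotation: the namespace bring-up that nvmf_tcp_init performed above is the backbone of the whole run. The first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; the two pings confirm the path in both directions before any NVMe traffic flows. A minimal sketch of the same sequence, using the interface and namespace names enumerated in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (port 4420) traffic
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator

The nvmf_tgt launched above is wrapped in 'ip netns exec cvl_0_0_ns_spdk', so its TCP listener binds inside the namespace rather than on the host side.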
00:20:21.886 [2024-07-15 12:54:52.791355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.886 [2024-07-15 12:54:52.791360] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.886 [2024-07-15 12:54:52.791365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.886 [2024-07-15 12:54:52.791381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:22.819 [2024-07-15 12:54:53.622056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.819 [2024-07-15 12:54:53.638055] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.819 [2024-07-15 12:54:53.638211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.819 [2024-07-15 12:54:53.666388] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:22.819 malloc0 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1757769 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1757769 /var/tmp/bdevperf.sock 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1757769 ']' 00:20:22.819 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.820 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:22.820 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.820 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.820 12:54:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.820 [2024-07-15 12:54:53.760583] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:22.820 [2024-07-15 12:54:53.760634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757769 ] 00:20:23.078 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.078 [2024-07-15 12:54:53.827809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.078 [2024-07-15 12:54:53.900575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.644 12:54:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.644 12:54:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:23.644 12:54:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:23.903 [2024-07-15 12:54:54.707601] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.903 [2024-07-15 12:54:54.707683] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:23.903 TLSTESTn1 00:20:23.903 12:54:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:24.162 Running I/O for 10 seconds... 
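Annotation: the TLS plumbing exercised above is worth spelling out. fips.sh writes the interchange-format PSK (NVMeTLSkey-1:01:...) to key.txt with mode 0600, registers it with the target subsystem via the deprecated PSK-path mechanism, and then attaches bdevperf's controller over TCP with the same key. The initiator-side attach as issued in this run, with the long Jenkins paths shortened:

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt    # the PSK file must not be group/world readable
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key.txt

Both the listener and the attach log 'TLS support is considered experimental', and the warnings above note that the PSK path and spdk_nvme_ctrlr_opts.psk are both scheduled for removal in v24.09.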
00:20:34.157
00:20:34.157                                                                Latency(us)
00:20:34.157 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:20:34.157 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:34.157 Verification LBA range: start 0x0 length 0x2000
00:20:34.157 TLSTESTn1                   :      10.06    4978.13      19.45      0.00      0.00   25628.71    7009.50   56531.92
00:20:34.157 ===================================================================================================================
00:20:34.157 Total                       :               4978.13      19.45      0.00      0.00   25628.71    7009.50   56531.92
00:20:34.157 0
00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:34.157 12:55:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:34.157 nvmf_trace.0 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1757769 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1757769 ']' 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1757769 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.157 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757769 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757769' 00:20:34.416 killing process with pid 1757769 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1757769
00:20:34.416 Received shutdown signal, test time was about 10.000000 seconds
00:20:34.416
00:20:34.416                                                                Latency(us)
00:20:34.416 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:20:34.416 ===================================================================================================================
00:20:34.416 Total                       :                  0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:20:34.416 [2024-07-15 12:55:05.116508] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1757769 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:34.416 rmmod nvme_tcp 00:20:34.416 rmmod nvme_fabrics 00:20:34.416 rmmod nvme_keyring 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1757685 ']' 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1757685 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1757685 ']' 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1757685 00:20:34.416 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757685 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757685' 00:20:34.675 killing process with pid 1757685 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1757685 00:20:34.675 [2024-07-15 12:55:05.412550] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1757685 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.675 12:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.251 12:55:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.251 12:55:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:37.251 00:20:37.251 real 0m21.128s 00:20:37.251 user 0m22.640s 00:20:37.251 sys 0m9.362s 00:20:37.251 12:55:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.251 12:55:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 ************************************ 00:20:37.251 END TEST nvmf_fips 
00:20:37.251 ************************************ 00:20:37.251 12:55:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:37.251 12:55:07 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:37.251 12:55:07 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:37.251 12:55:07 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:37.251 12:55:07 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:37.251 12:55:07 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.251 12:55:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.526 12:55:13 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.526 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.526 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:42.526 12:55:13 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:42.526 12:55:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:42.526 12:55:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:20:42.526 12:55:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.526 ************************************ 00:20:42.526 START TEST nvmf_perf_adq 00:20:42.526 ************************************ 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:42.527 * Looking for test storage... 00:20:42.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.527 12:55:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:49.093 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:49.093 Found 0000:86:00.1 (0x8086 - 0x159b) 
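Annotation: the repeated 'Found ...' lines come from gather_supported_nvmf_pci_devs, which matches each PCI function against the vendor:device tables built just above (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox ConnectX list) and then resolves each match to its bound netdev through sysfs. The resolution step, sketched for one of the ports found in this run:

  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob expands to e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

Both E810 ports resolve to cvl_0_0 and cvl_0_1, so the emptiness checks pass and the test proceeds with two usable TCP interfaces.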
00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:49.093 Found net devices under 0000:86:00.0: cvl_0_0 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:49.093 Found net devices under 0000:86:00.1: cvl_0_1 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:49.093 12:55:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:49.093 12:55:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:51.628 12:55:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:56.900 12:55:26 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.900 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.901 12:55:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:56.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:56.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:56.901 Found net devices under 0000:86:00.0: cvl_0_0 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:56.901 Found net devices under 0000:86:00.1: cvl_0_1 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.901 12:55:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:56.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:20:56.901 00:20:56.901 --- 10.0.0.2 ping statistics --- 00:20:56.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.901 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:20:56.901 00:20:56.901 --- 10.0.0.1 ping statistics --- 00:20:56.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.901 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1767653 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1767653 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1767653 ']' 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.901 12:55:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.901 [2024-07-15 12:55:27.324873] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
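Annotation: unlike the FIPS run, this nvmf_tgt is launched with --wait-for-rpc, so the application starts paused, presumably to let the posix sock implementation be tuned for ADQ (placement IDs, zero-copy send) before the framework initializes. The RPC sequence the following lines drive, sketched as plain rpc.py invocations (the script issues them through its rpc_cmd wrapper):

  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init                    # framework only comes up after the sock options are set
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, spdk_nvme_perf is pointed at the subsystem (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') on cores 0xF0 while the target occupies 0xF.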
00:20:56.901 [2024-07-15 12:55:27.324915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.902 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.902 [2024-07-15 12:55:27.391768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.902 [2024-07-15 12:55:27.473028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.902 [2024-07-15 12:55:27.473066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.902 [2024-07-15 12:55:27.473073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.902 [2024-07-15 12:55:27.473079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.902 [2024-07-15 12:55:27.473087] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.902 [2024-07-15 12:55:27.473138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.902 [2024-07-15 12:55:27.473262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.902 [2024-07-15 12:55:27.473290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.902 [2024-07-15 12:55:27.473292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 [2024-07-15 12:55:28.322870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 Malloc1 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 [2024-07-15 12:55:28.370790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1767901 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:57.465 12:55:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:57.465 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:59.993 
"tick_rate": 2300000000, 00:20:59.993 "poll_groups": [ 00:20:59.993 { 00:20:59.993 "name": "nvmf_tgt_poll_group_000", 00:20:59.993 "admin_qpairs": 1, 00:20:59.993 "io_qpairs": 1, 00:20:59.993 "current_admin_qpairs": 1, 00:20:59.993 "current_io_qpairs": 1, 00:20:59.993 "pending_bdev_io": 0, 00:20:59.993 "completed_nvme_io": 20050, 00:20:59.993 "transports": [ 00:20:59.993 { 00:20:59.993 "trtype": "TCP" 00:20:59.993 } 00:20:59.993 ] 00:20:59.993 }, 00:20:59.993 { 00:20:59.993 "name": "nvmf_tgt_poll_group_001", 00:20:59.993 "admin_qpairs": 0, 00:20:59.993 "io_qpairs": 1, 00:20:59.993 "current_admin_qpairs": 0, 00:20:59.993 "current_io_qpairs": 1, 00:20:59.993 "pending_bdev_io": 0, 00:20:59.993 "completed_nvme_io": 20260, 00:20:59.993 "transports": [ 00:20:59.993 { 00:20:59.993 "trtype": "TCP" 00:20:59.993 } 00:20:59.993 ] 00:20:59.993 }, 00:20:59.993 { 00:20:59.993 "name": "nvmf_tgt_poll_group_002", 00:20:59.993 "admin_qpairs": 0, 00:20:59.993 "io_qpairs": 1, 00:20:59.993 "current_admin_qpairs": 0, 00:20:59.993 "current_io_qpairs": 1, 00:20:59.993 "pending_bdev_io": 0, 00:20:59.993 "completed_nvme_io": 20171, 00:20:59.993 "transports": [ 00:20:59.993 { 00:20:59.993 "trtype": "TCP" 00:20:59.993 } 00:20:59.993 ] 00:20:59.993 }, 00:20:59.993 { 00:20:59.993 "name": "nvmf_tgt_poll_group_003", 00:20:59.993 "admin_qpairs": 0, 00:20:59.993 "io_qpairs": 1, 00:20:59.993 "current_admin_qpairs": 0, 00:20:59.993 "current_io_qpairs": 1, 00:20:59.993 "pending_bdev_io": 0, 00:20:59.993 "completed_nvme_io": 20067, 00:20:59.993 "transports": [ 00:20:59.993 { 00:20:59.993 "trtype": "TCP" 00:20:59.993 } 00:20:59.993 ] 00:20:59.993 } 00:20:59.993 ] 00:20:59.993 }' 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:59.993 12:55:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1767901 00:21:08.096 Initializing NVMe Controllers 00:21:08.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:08.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:08.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:08.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:08.096 Initialization complete. Launching workers. 
00:21:08.096 ======================================================== 00:21:08.096 Latency(us) 00:21:08.096 Device Information : IOPS MiB/s Average min max 00:21:08.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10592.03 41.38 6041.81 2590.84 9977.44 00:21:08.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10696.93 41.78 5985.11 2880.53 9560.36 00:21:08.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10638.43 41.56 6017.64 1933.53 9542.44 00:21:08.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10617.53 41.47 6028.58 2296.28 9259.26 00:21:08.096 ======================================================== 00:21:08.096 Total : 42544.91 166.19 6018.21 1933.53 9977.44 00:21:08.096 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.096 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.096 rmmod nvme_tcp 00:21:08.097 rmmod nvme_fabrics 00:21:08.097 rmmod nvme_keyring 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1767653 ']' 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1767653 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1767653 ']' 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1767653 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1767653 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1767653' 00:21:08.097 killing process with pid 1767653 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1767653 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1767653 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.097 12:55:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.002 12:55:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.002 12:55:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:10.002 12:55:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:11.427 12:55:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:13.337 12:55:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.609 12:55:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
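[editor note] The pci_net_devs loop replayed above maps each supported PCI function to the netdev the kernel bound to it via sysfs, keeping only links that are up. A condensed sketch of that scan, assuming the two e810 ports reported in this run; the real common.sh additionally builds the e810/x722/mlx device-ID tables traced above:

  pci_devs=(0000:86:00.0 0000:86:00.1)    # the e810 ports found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $net_dev ]] || continue                      # no netdev bound yet
          [[ $(< "$net_dev/operstate") == up ]] || continue  # keep only live links
          echo "Found net devices under $pci: ${net_dev##*/}"
          net_devs+=("${net_dev##*/}")
      done
  done
  (( ${#net_devs[@]} > 0 )) || { echo "no usable ports" >&2; exit 1; }

With two devices found, the script picks the first as NVMF_TARGET_INTERFACE and the second as NVMF_INITIATOR_INTERFACE, which is why the (( 2 > 1 )) branch above selects cvl_0_0 and cvl_0_1.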
00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.609 Found net devices under 0000:86:00.0: cvl_0_0 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.609 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.610 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.610 12:55:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.610 
12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:18.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:21:18.610 00:21:18.610 --- 10.0.0.2 ping statistics --- 00:21:18.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.610 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:21:18.610 00:21:18.610 --- 10.0.0.1 ping statistics --- 00:21:18.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.610 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:18.610 net.core.busy_poll = 1 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:18.610 net.core.busy_read = 1 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1771552 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1771552 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1771552 ']' 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.610 12:55:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.610 [2024-07-15 12:55:49.560454] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:18.610 [2024-07-15 12:55:49.560506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.868 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.868 [2024-07-15 12:55:49.632578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.868 [2024-07-15 12:55:49.712439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.868 [2024-07-15 12:55:49.712478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.868 [2024-07-15 12:55:49.712485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.868 [2024-07-15 12:55:49.712491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.868 [2024-07-15 12:55:49.712496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
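[editor note] The adq_configure_driver steps traced just above are the driver half of the ADQ setup: hardware TC offload plus busy polling on the host, then an mqprio root qdisc that splits the queues into two traffic classes and a flower filter that pins NVMe/TCP traffic to the second class in hardware. A minimal sketch under the same assumptions (ice port cvl_0_0 inside namespace cvl_0_0_ns_spdk); only the nsexec wrapper variable is added here, the commands themselves are from the trace:

  nsexec=(ip netns exec cvl_0_0_ns_spdk)
  "${nsexec[@]}" ethtool --offload cvl_0_0 hw-tc-offload on
  "${nsexec[@]}" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1    # spin in the socket layer instead of sleeping
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 = 2 default queues (2@0), TC1 = 2 ADQ queues (2@2)
  "${nsexec[@]}" tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
      queues 2@0 2@2 hw 1 mode channel
  "${nsexec[@]}" tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP (dst port 4420) into TC1 entirely in hardware
  "${nsexec[@]}" tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
      flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With skip_sw the flower rule is programmed only into the NIC, so matching connections land on the dedicated queue pair set that busy polling then services; the set_xps_rxqs helper invoked above additionally aligns transmit queues with the same CPUs.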
00:21:18.868 [2024-07-15 12:55:49.712552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.868 [2024-07-15 12:55:49.712659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.868 [2024-07-15 12:55:49.712765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.868 [2024-07-15 12:55:49.712767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.433 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.433 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:19.433 12:55:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.433 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.433 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 [2024-07-15 12:55:50.561201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 Malloc1 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 [2024-07-15 12:55:50.612870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1771739 00:21:19.692 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:19.692 12:55:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:19.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.849 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:21.849 12:55:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.849 12:55:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.849 12:55:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.849 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:21.849 "tick_rate": 2300000000, 00:21:21.849 "poll_groups": [ 00:21:21.849 { 00:21:21.849 "name": "nvmf_tgt_poll_group_000", 00:21:21.849 "admin_qpairs": 1, 00:21:21.849 "io_qpairs": 3, 00:21:21.849 "current_admin_qpairs": 1, 00:21:21.849 "current_io_qpairs": 3, 00:21:21.849 "pending_bdev_io": 0, 00:21:21.849 "completed_nvme_io": 29050, 00:21:21.849 "transports": [ 00:21:21.849 { 00:21:21.849 "trtype": "TCP" 00:21:21.849 } 00:21:21.849 ] 00:21:21.849 }, 00:21:21.849 { 00:21:21.849 "name": "nvmf_tgt_poll_group_001", 00:21:21.849 "admin_qpairs": 0, 00:21:21.849 "io_qpairs": 1, 00:21:21.849 "current_admin_qpairs": 0, 00:21:21.850 "current_io_qpairs": 1, 00:21:21.850 "pending_bdev_io": 0, 00:21:21.850 "completed_nvme_io": 28213, 00:21:21.850 "transports": [ 00:21:21.850 { 00:21:21.850 "trtype": "TCP" 00:21:21.850 } 00:21:21.850 ] 00:21:21.850 }, 00:21:21.850 { 00:21:21.850 "name": "nvmf_tgt_poll_group_002", 00:21:21.850 "admin_qpairs": 0, 00:21:21.850 "io_qpairs": 0, 00:21:21.850 "current_admin_qpairs": 0, 00:21:21.850 "current_io_qpairs": 0, 00:21:21.850 "pending_bdev_io": 0, 00:21:21.850 "completed_nvme_io": 0, 
00:21:21.850 "transports": [ 00:21:21.850 { 00:21:21.850 "trtype": "TCP" 00:21:21.850 } 00:21:21.850 ] 00:21:21.850 }, 00:21:21.850 { 00:21:21.850 "name": "nvmf_tgt_poll_group_003", 00:21:21.850 "admin_qpairs": 0, 00:21:21.850 "io_qpairs": 0, 00:21:21.850 "current_admin_qpairs": 0, 00:21:21.850 "current_io_qpairs": 0, 00:21:21.850 "pending_bdev_io": 0, 00:21:21.850 "completed_nvme_io": 0, 00:21:21.850 "transports": [ 00:21:21.850 { 00:21:21.850 "trtype": "TCP" 00:21:21.850 } 00:21:21.850 ] 00:21:21.850 } 00:21:21.850 ] 00:21:21.850 }' 00:21:21.850 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:21.850 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:21.850 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:21.850 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:21.850 12:55:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1771739 00:21:29.960 Initializing NVMe Controllers 00:21:29.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:29.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:29.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:29.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:29.960 Initialization complete. Launching workers. 00:21:29.960 ======================================================== 00:21:29.960 Latency(us) 00:21:29.960 Device Information : IOPS MiB/s Average min max 00:21:29.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4796.62 18.74 13342.67 1689.28 57713.69 00:21:29.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5053.12 19.74 12665.38 1517.65 58124.05 00:21:29.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5720.71 22.35 11221.76 1482.45 58153.57 00:21:29.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14863.46 58.06 4305.18 1131.14 44815.84 00:21:29.960 ======================================================== 00:21:29.960 Total : 30433.92 118.88 8417.77 1131.14 58153.57 00:21:29.960 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.960 rmmod nvme_tcp 00:21:29.960 rmmod nvme_fabrics 00:21:29.960 rmmod nvme_keyring 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1771552 ']' 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1771552 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1771552 ']' 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1771552 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.960 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1771552 00:21:30.218 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.219 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.219 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1771552' 00:21:30.219 killing process with pid 1771552 00:21:30.219 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1771552 00:21:30.219 12:56:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1771552 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.219 12:56:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.508 12:56:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.508 12:56:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:33.508 00:21:33.508 real 0m51.008s 00:21:33.508 user 2m49.588s 00:21:33.508 sys 0m9.598s 00:21:33.508 12:56:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.508 12:56:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.508 ************************************ 00:21:33.508 END TEST nvmf_perf_adq 00:21:33.508 ************************************ 00:21:33.508 12:56:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:33.508 12:56:04 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:33.508 12:56:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:33.508 12:56:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.508 12:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:33.508 ************************************ 00:21:33.508 START TEST nvmf_shutdown 00:21:33.508 ************************************ 00:21:33.508 12:56:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:33.508 * Looking for test storage... 
00:21:33.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.509 ************************************ 00:21:33.509 START TEST nvmf_shutdown_tc1 00:21:33.509 ************************************ 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:33.509 12:56:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.509 12:56:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.079 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.079 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.080 12:56:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.080 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.080 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.080 12:56:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:21:40.080 00:21:40.080 --- 10.0.0.2 ping statistics --- 00:21:40.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.080 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:21:40.080 00:21:40.080 --- 10.0.0.1 ping statistics --- 00:21:40.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.080 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1777173 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1777173 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1777173 ']' 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.080 12:56:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.080 [2024-07-15 12:56:10.259684] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
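The preceding trace is nvmf_tcp_init turning the two e810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side, where nvmf_tgt is then launched under ip netns exec), while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator; an iptables rule admits TCP/4420 and one ping in each direction proves the path. Condensed from the commands in the trace (a sketch assuming two back-to-back-cabled ports named cvl_0_0 and cvl_0_1, run as root):

#!/usr/bin/env bash
# Recreate the target/initiator split used by nvmf_tcp_init (sketch, not the
# framework's exact code): target port in its own netns, initiator in the root ns.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit the NVMe/TCP listener port
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> initiator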
00:21:40.080 [2024-07-15 12:56:10.259727] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.080 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.080 [2024-07-15 12:56:10.331195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.080 [2024-07-15 12:56:10.410869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.080 [2024-07-15 12:56:10.410904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.080 [2024-07-15 12:56:10.410911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.080 [2024-07-15 12:56:10.410917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.080 [2024-07-15 12:56:10.410922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.080 [2024-07-15 12:56:10.411042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.080 [2024-07-15 12:56:10.411147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.080 [2024-07-15 12:56:10.411265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:40.080 [2024-07-15 12:56:10.411271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.336 [2024-07-15 12:56:11.127030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.336 12:56:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.336 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.336 Malloc1 00:21:40.336 [2024-07-15 12:56:11.222997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.336 Malloc2 00:21:40.336 Malloc3 00:21:40.592 Malloc4 00:21:40.592 Malloc5 00:21:40.592 Malloc6 00:21:40.592 Malloc7 00:21:40.592 Malloc8 00:21:40.849 Malloc9 00:21:40.849 Malloc10 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1777449 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1777449 
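The ten `cat` calls above are shutdown.sh's create_subsystems loop: each iteration appends one subsystem's worth of RPC lines to rpcs.txt, and the single rpc_cmd at shutdown.sh line 35 replays the whole file against /var/tmp/spdk.sock, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear in one burst. The heredoc contents are not echoed in this trace; per subsystem they plausibly expand to something like the following (a hypothetical reconstruction using standard SPDK RPC names and made-up bdev sizes, not the script's verbatim text):

# Assumed shape of one loop iteration's stanza in rpcs.txt, for i=1:
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK0000000000000001
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420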
/var/tmp/bdevperf.sock 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1777449 ']' 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.849 { 00:21:40.849 "params": { 00:21:40.849 "name": "Nvme$subsystem", 00:21:40.849 "trtype": "$TEST_TRANSPORT", 00:21:40.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.849 "adrfam": "ipv4", 00:21:40.849 "trsvcid": "$NVMF_PORT", 00:21:40.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.849 "hdgst": ${hdgst:-false}, 00:21:40.849 "ddgst": ${ddgst:-false} 00:21:40.849 }, 00:21:40.849 "method": "bdev_nvme_attach_controller" 00:21:40.849 } 00:21:40.849 EOF 00:21:40.849 )") 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.849 { 00:21:40.849 "params": { 00:21:40.849 "name": "Nvme$subsystem", 00:21:40.849 "trtype": "$TEST_TRANSPORT", 00:21:40.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.849 "adrfam": "ipv4", 00:21:40.849 "trsvcid": "$NVMF_PORT", 00:21:40.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.849 "hdgst": ${hdgst:-false}, 00:21:40.849 "ddgst": ${ddgst:-false} 00:21:40.849 }, 00:21:40.849 "method": "bdev_nvme_attach_controller" 00:21:40.849 } 00:21:40.849 EOF 00:21:40.849 )") 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.849 { 00:21:40.849 "params": { 00:21:40.849 
"name": "Nvme$subsystem", 00:21:40.849 "trtype": "$TEST_TRANSPORT", 00:21:40.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.849 "adrfam": "ipv4", 00:21:40.849 "trsvcid": "$NVMF_PORT", 00:21:40.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.849 "hdgst": ${hdgst:-false}, 00:21:40.849 "ddgst": ${ddgst:-false} 00:21:40.849 }, 00:21:40.849 "method": "bdev_nvme_attach_controller" 00:21:40.849 } 00:21:40.849 EOF 00:21:40.849 )") 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.849 { 00:21:40.849 "params": { 00:21:40.849 "name": "Nvme$subsystem", 00:21:40.849 "trtype": "$TEST_TRANSPORT", 00:21:40.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.849 "adrfam": "ipv4", 00:21:40.849 "trsvcid": "$NVMF_PORT", 00:21:40.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.849 "hdgst": ${hdgst:-false}, 00:21:40.849 "ddgst": ${ddgst:-false} 00:21:40.849 }, 00:21:40.849 "method": "bdev_nvme_attach_controller" 00:21:40.849 } 00:21:40.849 EOF 00:21:40.849 )") 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.849 { 00:21:40.849 "params": { 00:21:40.849 "name": "Nvme$subsystem", 00:21:40.849 "trtype": "$TEST_TRANSPORT", 00:21:40.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.849 "adrfam": "ipv4", 00:21:40.849 "trsvcid": "$NVMF_PORT", 00:21:40.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.849 "hdgst": ${hdgst:-false}, 00:21:40.849 "ddgst": ${ddgst:-false} 00:21:40.849 }, 00:21:40.849 "method": "bdev_nvme_attach_controller" 00:21:40.849 } 00:21:40.849 EOF 00:21:40.849 )") 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.849 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.849 { 00:21:40.849 "params": { 00:21:40.849 "name": "Nvme$subsystem", 00:21:40.849 "trtype": "$TEST_TRANSPORT", 00:21:40.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "$NVMF_PORT", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.850 "hdgst": ${hdgst:-false}, 00:21:40.850 "ddgst": ${ddgst:-false} 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 } 00:21:40.850 EOF 00:21:40.850 )") 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.850 { 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme$subsystem", 
00:21:40.850 "trtype": "$TEST_TRANSPORT", 00:21:40.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "$NVMF_PORT", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.850 "hdgst": ${hdgst:-false}, 00:21:40.850 "ddgst": ${ddgst:-false} 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 } 00:21:40.850 EOF 00:21:40.850 )") 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.850 [2024-07-15 12:56:11.695862] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:40.850 [2024-07-15 12:56:11.695911] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.850 { 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme$subsystem", 00:21:40.850 "trtype": "$TEST_TRANSPORT", 00:21:40.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "$NVMF_PORT", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.850 "hdgst": ${hdgst:-false}, 00:21:40.850 "ddgst": ${ddgst:-false} 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 } 00:21:40.850 EOF 00:21:40.850 )") 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.850 { 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme$subsystem", 00:21:40.850 "trtype": "$TEST_TRANSPORT", 00:21:40.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "$NVMF_PORT", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.850 "hdgst": ${hdgst:-false}, 00:21:40.850 "ddgst": ${ddgst:-false} 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 } 00:21:40.850 EOF 00:21:40.850 )") 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.850 { 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme$subsystem", 00:21:40.850 "trtype": "$TEST_TRANSPORT", 00:21:40.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "$NVMF_PORT", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.850 "hdgst": ${hdgst:-false}, 00:21:40.850 "ddgst": ${ddgst:-false} 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 } 00:21:40.850 EOF 00:21:40.850 )") 00:21:40.850 12:56:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:40.850 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:40.850 12:56:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme1", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme2", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme3", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme4", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme5", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme6", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme7", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme8", 00:21:40.850 "trtype": "tcp", 00:21:40.850 
"traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme9", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 },{ 00:21:40.850 "params": { 00:21:40.850 "name": "Nvme10", 00:21:40.850 "trtype": "tcp", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "adrfam": "ipv4", 00:21:40.850 "trsvcid": "4420", 00:21:40.850 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:40.850 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:40.850 "hdgst": false, 00:21:40.850 "ddgst": false 00:21:40.850 }, 00:21:40.850 "method": "bdev_nvme_attach_controller" 00:21:40.850 }' 00:21:40.850 [2024-07-15 12:56:11.767253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.107 [2024-07-15 12:56:11.840782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1777449 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:42.541 12:56:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:43.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1777449 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1777173 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.473 { 00:21:43.473 "params": { 00:21:43.473 "name": "Nvme$subsystem", 00:21:43.473 "trtype": "$TEST_TRANSPORT", 00:21:43.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.473 "adrfam": "ipv4", 00:21:43.473 "trsvcid": "$NVMF_PORT", 00:21:43.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.473 "hdgst": ${hdgst:-false}, 00:21:43.473 "ddgst": ${ddgst:-false} 00:21:43.473 }, 00:21:43.473 "method": "bdev_nvme_attach_controller" 00:21:43.473 } 00:21:43.473 EOF 00:21:43.473 )") 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.473 { 00:21:43.473 "params": { 00:21:43.473 "name": "Nvme$subsystem", 00:21:43.473 "trtype": "$TEST_TRANSPORT", 00:21:43.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.473 "adrfam": "ipv4", 00:21:43.473 "trsvcid": "$NVMF_PORT", 00:21:43.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.473 "hdgst": ${hdgst:-false}, 00:21:43.473 "ddgst": ${ddgst:-false} 00:21:43.473 }, 00:21:43.473 "method": "bdev_nvme_attach_controller" 00:21:43.473 } 00:21:43.473 EOF 00:21:43.473 )") 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.473 { 00:21:43.473 "params": { 00:21:43.473 "name": "Nvme$subsystem", 00:21:43.473 "trtype": "$TEST_TRANSPORT", 00:21:43.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.473 "adrfam": "ipv4", 00:21:43.473 "trsvcid": "$NVMF_PORT", 00:21:43.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.473 "hdgst": ${hdgst:-false}, 00:21:43.473 "ddgst": ${ddgst:-false} 00:21:43.473 }, 00:21:43.473 "method": "bdev_nvme_attach_controller" 00:21:43.473 } 00:21:43.473 EOF 00:21:43.473 )") 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.473 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 [2024-07-15 12:56:14.255026] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:43.474 [2024-07-15 12:56:14.255077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777939 ] 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.474 { 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme$subsystem", 00:21:43.474 "trtype": "$TEST_TRANSPORT", 00:21:43.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "$NVMF_PORT", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.474 "hdgst": ${hdgst:-false}, 00:21:43.474 "ddgst": ${ddgst:-false} 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 } 00:21:43.474 EOF 00:21:43.474 )") 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
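Both invocations of gen_nvmf_target_json follow the same pattern visible in the trace: one heredoc expansion per Nvme$subsystem is pushed into the config array, `jq .` checks that the assembled document parses, and `IFS=,` plus "${config[*]}" joins the fragments with commas for printf, producing the attach-controller list printed next. The skeleton, runnable on its own (a sketch, with defaults filled in for the variables the framework would normally export):

#!/usr/bin/env bash
# Skeleton of the gen_nvmf_target_json pattern seen above (not the verbatim function).
: "${TEST_TRANSPORT:=tcp}" "${NVMF_FIRST_TARGET_IP:=10.0.0.2}" "${NVMF_PORT:=4420}"
config=()
for subsystem in "${@:-1}"; do                 # "${@:-1}" defaults to subsystem 1
config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
  "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "$NVMF_PORT",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
  "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # "${config[*]}" joins the fragments with the first IFS char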
00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:43.474 12:56:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme1", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme2", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme3", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme4", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme5", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme6", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme7", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.474 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.474 "hdgst": false, 00:21:43.474 "ddgst": false 00:21:43.474 }, 00:21:43.474 "method": "bdev_nvme_attach_controller" 00:21:43.474 },{ 00:21:43.474 "params": { 00:21:43.474 "name": "Nvme8", 00:21:43.474 "trtype": "tcp", 00:21:43.474 "traddr": "10.0.0.2", 00:21:43.474 "adrfam": "ipv4", 00:21:43.474 "trsvcid": "4420", 00:21:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.475 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.475 "hdgst": false, 
00:21:43.475 "ddgst": false 00:21:43.475 }, 00:21:43.475 "method": "bdev_nvme_attach_controller" 00:21:43.475 },{ 00:21:43.475 "params": { 00:21:43.475 "name": "Nvme9", 00:21:43.475 "trtype": "tcp", 00:21:43.475 "traddr": "10.0.0.2", 00:21:43.475 "adrfam": "ipv4", 00:21:43.475 "trsvcid": "4420", 00:21:43.475 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.475 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:43.475 "hdgst": false, 00:21:43.475 "ddgst": false 00:21:43.475 }, 00:21:43.475 "method": "bdev_nvme_attach_controller" 00:21:43.475 },{ 00:21:43.475 "params": { 00:21:43.475 "name": "Nvme10", 00:21:43.475 "trtype": "tcp", 00:21:43.475 "traddr": "10.0.0.2", 00:21:43.475 "adrfam": "ipv4", 00:21:43.475 "trsvcid": "4420", 00:21:43.475 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.475 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.475 "hdgst": false, 00:21:43.475 "ddgst": false 00:21:43.475 }, 00:21:43.475 "method": "bdev_nvme_attach_controller" 00:21:43.475 }' 00:21:43.475 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.475 [2024-07-15 12:56:14.325971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.475 [2024-07-15 12:56:14.400245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:45.373 Running I/O for 1 seconds...
00:21:46.303
00:21:46.303 Latency(us)
00:21:46.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme1n1 : 1.07 238.35 14.90 0.00 0.00 265965.97 18350.08 226127.69
00:21:46.303 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme2n1 : 1.05 244.00 15.25 0.00 0.00 255805.89 17096.35 215186.03
00:21:46.303 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme3n1 : 1.11 288.52 18.03 0.00 0.00 213368.12 15728.64 210627.01
00:21:46.303 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme4n1 : 1.10 295.45 18.47 0.00 0.00 201857.10 7807.33 218833.25
00:21:46.303 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme5n1 : 1.12 288.81 18.05 0.00 0.00 206694.49 3162.82 214274.23
00:21:46.303 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme6n1 : 1.12 296.45 18.53 0.00 0.00 197224.22 7921.31 213362.42
00:21:46.303 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme7n1 : 1.13 285.81 17.86 0.00 0.00 202857.13 1652.65 217009.64
00:21:46.303 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme8n1 : 1.12 286.64 17.92 0.00 0.00 199118.67 18578.03 227039.50
00:21:46.303 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme9n1 : 1.13 297.16 18.57 0.00 0.00 188872.28 2379.24 224304.08
00:21:46.303 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.303 Verification LBA range: start 0x0 length 0x400
00:21:46.303 Nvme10n1 : 1.17 328.28 20.52 0.00 0.00 169462.28 3875.17 238892.97
00:21:46.303 ===================================================================================================================
00:21:46.303 Total : 2849.48 178.09 0.00 0.00 207071.68 1652.65 238892.97
00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.561 rmmod nvme_tcp 00:21:46.561 rmmod nvme_fabrics 00:21:46.561 rmmod nvme_keyring 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1777173 ']' 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1777173 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1777173 ']' 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1777173 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777173 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777173' 00:21:46.561 killing process with pid 1777173 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1777173 00:21:46.561 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1777173
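Two sanity checks on the run above. First, the results table is internally consistent: with -o 65536 each I/O is 64 KiB, so MiB/s should equal IOPS/16, and indeed Nvme1n1's 238.35 IOPS × 65536 / 2^20 ≈ 14.90 MiB/s matches the printed column. Second, the teardown is the inverse of the setup: modprobe -v -r unloads nvme-tcp (the bare "rmmod ..." lines are its verbose output, including the nvme_fabrics and nvme_keyring dependencies), and killprocess only signals pid 1777173 after confirming it is still alive and is a reactor thread rather than a sudo wrapper. The liveness-and-identity idiom, as a reduced sketch (non-sudo path only; the framework's full version handles more cases):

# Sketch of the killprocess idiom visible in the trace above:
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                       # fail fast if already gone
    if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap it; works because the target is our child
    fi
}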
00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.128 12:56:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.031 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.031 00:21:49.031 real 0m15.454s 00:21:49.031 user 0m35.263s 00:21:49.031 sys 0m5.711s 00:21:49.031 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.031 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.031 ************************************ 00:21:49.031 END TEST nvmf_shutdown_tc1 00:21:49.032 ************************************ 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:49.032 ************************************ 00:21:49.032 START TEST nvmf_shutdown_tc2 00:21:49.032 ************************************ 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:49.032 
12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.032 12:56:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:49.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.032 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:49.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:49.291 Found net devices under 0000:86:00.0: cvl_0_0 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:49.291 Found net devices under 0000:86:00.1: cvl_0_1 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.291 12:56:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:49.291 00:21:49.291 --- 10.0.0.2 ping statistics --- 00:21:49.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.291 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:49.291 00:21:49.291 --- 10.0.0.1 ping statistics --- 00:21:49.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.291 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.291 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1778964 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1778964 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1778964 ']' 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.549 12:56:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:49.549 [2024-07-15 12:56:20.316134] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:49.549 [2024-07-15 12:56:20.316174] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.549 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.549 [2024-07-15 12:56:20.373690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.549 [2024-07-15 12:56:20.451772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.549 [2024-07-15 12:56:20.451810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.549 [2024-07-15 12:56:20.451817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.550 [2024-07-15 12:56:20.451823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.550 [2024-07-15 12:56:20.451829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
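The nvmfappstart step traced above comes down to: launch nvmf_tgt inside the target namespace with the requested core mask, remember its pid, and block until the RPC socket answers. A minimal sketch under those assumptions; SPDK_BIN_DIR stands in for the full build path shown in the log, and waitforlisten's polling body is omitted:

# Start the NVMe-oF target in the netns and wait for /var/tmp/spdk.sock.
nvmfappstart_sketch() {
  ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls the UNIX domain socket until it accepts
}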
00:21:49.550 [2024-07-15 12:56:20.451893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.550 [2024-07-15 12:56:20.452000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.550 [2024-07-15 12:56:20.452108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.550 [2024-07-15 12:56:20.452109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.482 [2024-07-15 12:56:21.169144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.482 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.482 Malloc1 00:21:50.482 [2024-07-15 12:56:21.265258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.482 Malloc2 00:21:50.482 Malloc3 00:21:50.482 Malloc4 00:21:50.482 Malloc5 00:21:50.740 Malloc6 00:21:50.740 Malloc7 00:21:50.740 Malloc8 00:21:50.740 Malloc9 00:21:50.740 Malloc10 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1779237 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1779237 /var/tmp/bdevperf.sock 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1779237 ']' 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
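Each shutdown.sh@27/@28 for/cat pair above appends one subsystem's RPC lines to rpcs.txt, and the single rpc_cmd at @35 replays the whole batch, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener show up in one burst. xtrace does not echo heredoc bodies, so the batch below is a plausible reconstruction using standard rpc.py method names rather than the literal file contents:

# Hypothetical rpcs.txt generator for the ten test subsystems.
num_subsystems=({1..10})
for i in "${num_subsystems[@]}"; do
  cat >> rpcs.txt << EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc_cmd (the wrapper over scripts/rpc.py) then executes the file in one call.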
00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:50.740 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:50.741 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.741 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.741 { 00:21:50.741 "params": { 00:21:50.741 "name": "Nvme$subsystem", 00:21:50.741 "trtype": "$TEST_TRANSPORT", 00:21:50.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.741 "adrfam": "ipv4", 00:21:50.741 "trsvcid": "$NVMF_PORT", 00:21:50.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.741 "hdgst": ${hdgst:-false}, 00:21:50.741 "ddgst": ${ddgst:-false} 00:21:50.741 }, 00:21:50.741 "method": "bdev_nvme_attach_controller" 00:21:50.741 } 00:21:50.741 EOF 00:21:50.741 )") 00:21:50.741 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 
00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 [2024-07-15 12:56:21.730530] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:50.999 [2024-07-15 12:56:21.730576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779237 ] 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.999 { 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme$subsystem", 00:21:50.999 "trtype": "$TEST_TRANSPORT", 00:21:50.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "$NVMF_PORT", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.999 "hdgst": ${hdgst:-false}, 00:21:50.999 "ddgst": ${ddgst:-false} 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 } 00:21:50.999 EOF 00:21:50.999 )") 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:50.999 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
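The repeated config+=() heredocs above, with the IFS=',' / printf / jq . combination, are gen_nvmf_target_json assembling the --json config that bdevperf reads through /dev/fd/63: one bdev_nvme_attach_controller entry per subsystem, comma-joined and validated by jq. A minimal sketch of the pattern; the fragments and the comma-join are visible in the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about the final shape:

# Emit one attach-controller fragment per subsystem id, then pass the
# joined config through jq for validation/pretty-printing.
gen_nvmf_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat << FRAG
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
FRAG
    )")
  done
  jq . << JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=","; printf '%s' "${config[*]}") ] } ] }
JSON
}

Once bdevperf is up and 'Running I/O for 10 seconds...' appears, the read_io_count polling traced a little further on (3, then 67, then 195 against the -ge 100 threshold at shutdown.sh@57-69) is the waitforio helper confirming that I/O is actually flowing before the target gets shut down mid-run. A sketch of that loop, with rpc_cmd standing in for the scripts/rpc.py wrapper shown in the log:

# Poll the bdev's read counter over the bdevperf RPC socket until it has
# completed 100 reads; at most 10 polls, 0.25 s apart.
waitforio_sketch() {
  local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}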
00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:50.999 12:56:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme1", 00:21:50.999 "trtype": "tcp", 00:21:50.999 "traddr": "10.0.0.2", 00:21:50.999 "adrfam": "ipv4", 00:21:50.999 "trsvcid": "4420", 00:21:50.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.999 "hdgst": false, 00:21:50.999 "ddgst": false 00:21:50.999 }, 00:21:50.999 "method": "bdev_nvme_attach_controller" 00:21:50.999 },{ 00:21:50.999 "params": { 00:21:50.999 "name": "Nvme2", 00:21:50.999 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme3", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme4", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme5", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme6", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme7", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme8", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:51.000 "hdgst": false, 
00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme9", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 },{ 00:21:51.000 "params": { 00:21:51.000 "name": "Nvme10", 00:21:51.000 "trtype": "tcp", 00:21:51.000 "traddr": "10.0.0.2", 00:21:51.000 "adrfam": "ipv4", 00:21:51.000 "trsvcid": "4420", 00:21:51.000 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:51.000 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:51.000 "hdgst": false, 00:21:51.000 "ddgst": false 00:21:51.000 }, 00:21:51.000 "method": "bdev_nvme_attach_controller" 00:21:51.000 }' 00:21:51.000 [2024-07-15 12:56:21.800118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.000 [2024-07-15 12:56:21.873849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.911 Running I/O for 10 seconds... 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:21:52.911 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:53.168 12:56:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:53.426 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1779237 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1779237 ']' 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1779237 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779237 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779237' 00:21:53.427 killing process with pid 1779237 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1779237 00:21:53.427 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1779237 00:21:53.427 Received shutdown signal, test time was about 0.919777 seconds 00:21:53.427 00:21:53.427 Latency(us) 00:21:53.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.427 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme1n1 : 0.90 284.10 17.76 0.00 0.00 222662.79 15728.64 210627.01 00:21:53.427 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme2n1 : 0.92 272.96 17.06 0.00 0.00 226926.44 19261.89 217921.45 00:21:53.427 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme3n1 : 0.90 284.62 17.79 0.00 0.00 214600.35 14246.96 215186.03 00:21:53.427 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme4n1 : 0.90 285.97 17.87 0.00 0.00 209483.69 16184.54 207891.59 00:21:53.427 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme5n1 : 0.92 278.53 17.41 0.00 0.00 211539.03 16754.42 225215.89 00:21:53.427 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme6n1 : 0.91 281.56 17.60 0.00 0.00 205030.40 19033.93 198773.54 00:21:53.427 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme7n1 : 0.88 296.58 18.54 0.00 0.00 189403.73 7180.47 217921.45 00:21:53.427 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme8n1 : 0.91 282.17 17.64 0.00 0.00 196734.89 29177.77 201508.95 00:21:53.427 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme9n1 : 0.91 280.45 17.53 0.00 0.00 194056.24 16640.45 217009.64 00:21:53.427 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.427 Verification LBA range: start 0x0 length 0x400 00:21:53.427 Nvme10n1 : 0.89 216.59 13.54 0.00 0.00 245059.23 18578.03 240716.58 00:21:53.427 =================================================================================================================== 00:21:53.427 Total : 2763.54 172.72 0.00 0.00 210608.76 7180.47 240716.58 00:21:53.685 12:56:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:54.618 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1778964 00:21:54.618 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:54.618 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f 
./local-job0-0-verify.state 00:21:54.618 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:54.618 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.876 rmmod nvme_tcp 00:21:54.876 rmmod nvme_fabrics 00:21:54.876 rmmod nvme_keyring 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1778964 ']' 00:21:54.876 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1778964 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1778964 ']' 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1778964 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1778964 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1778964' 00:21:54.877 killing process with pid 1778964 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1778964 00:21:54.877 12:56:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1778964 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- 
# remove_spdk_ns 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.135 12:56:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.669 00:21:57.669 real 0m8.173s 00:21:57.669 user 0m25.179s 00:21:57.669 sys 0m1.340s 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.669 ************************************ 00:21:57.669 END TEST nvmf_shutdown_tc2 00:21:57.669 ************************************ 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.669 ************************************ 00:21:57.669 START TEST nvmf_shutdown_tc3 00:21:57.669 ************************************ 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.669 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.670 12:56:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.670 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.670 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.670 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
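A minimal standalone re-creation of the sysfs lookup traced above, using the PCI address this run reported; the variable names mirror nvmf/common.sh, but the snippet itself is an illustrative sketch only:

#!/usr/bin/env bash
shopt -s nullglob                                  # unmatched glob -> empty array
# Map a PCI function to the net devices the kernel bound to it; the
# "Found net devices under ..." lines above come from exactly this glob.
pci=0000:86:00.0                                   # address reported in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface
if ((${#pci_net_devs[@]} == 0)); then              # harness likewise skips devices with no netdev
    echo "No net devices under $pci" >&2
    exit 1
fi
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip sysfs path, keep e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"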
00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.670 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:21:57.670 00:21:57.670 --- 10.0.0.2 ping statistics --- 00:21:57.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.670 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:21:57.670 00:21:57.670 --- 10.0.0.1 ping statistics --- 00:21:57.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.670 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.670 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1780506 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1780506 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1780506 ']' 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.671 12:56:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:57.671 [2024-07-15 12:56:28.567857] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:57.671 [2024-07-15 12:56:28.567902] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.671 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.929 [2024-07-15 12:56:28.634912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.929 [2024-07-15 12:56:28.713452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.929 [2024-07-15 12:56:28.713487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.929 [2024-07-15 12:56:28.713494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.929 [2024-07-15 12:56:28.713500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.929 [2024-07-15 12:56:28.713504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.929 [2024-07-15 12:56:28.713613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.929 [2024-07-15 12:56:28.713722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.929 [2024-07-15 12:56:28.713805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.929 [2024-07-15 12:56:28.713806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.493 [2024-07-15 12:56:29.405108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.493 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.751 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:58.751 Malloc1 00:21:58.751 [2024-07-15 12:56:29.501181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.751 Malloc2 00:21:58.751 Malloc3 00:21:58.751 Malloc4 00:21:58.751 Malloc5 00:21:58.751 Malloc6 00:21:59.008 Malloc7 00:21:59.008 Malloc8 00:21:59.008 Malloc9 00:21:59.008 Malloc10 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1780781 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1780781 /var/tmp/bdevperf.sock 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1780781 ']' 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.008 { 00:21:59.008 "params": { 00:21:59.008 "name": "Nvme$subsystem", 00:21:59.008 "trtype": "$TEST_TRANSPORT", 00:21:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.008 "adrfam": "ipv4", 00:21:59.008 "trsvcid": "$NVMF_PORT", 00:21:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.008 "hdgst": ${hdgst:-false}, 00:21:59.008 "ddgst": ${ddgst:-false} 00:21:59.008 }, 00:21:59.008 "method": "bdev_nvme_attach_controller" 00:21:59.008 } 00:21:59.008 EOF 00:21:59.008 )") 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.008 { 00:21:59.008 "params": { 00:21:59.008 "name": "Nvme$subsystem", 00:21:59.008 "trtype": "$TEST_TRANSPORT", 00:21:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.008 "adrfam": "ipv4", 00:21:59.008 "trsvcid": "$NVMF_PORT", 00:21:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.008 "hdgst": ${hdgst:-false}, 00:21:59.008 "ddgst": ${ddgst:-false} 
00:21:59.008 }, 00:21:59.008 "method": "bdev_nvme_attach_controller" 00:21:59.008 } 00:21:59.008 EOF 00:21:59.008 )") 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.008 { 00:21:59.008 "params": { 00:21:59.008 "name": "Nvme$subsystem", 00:21:59.008 "trtype": "$TEST_TRANSPORT", 00:21:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.008 "adrfam": "ipv4", 00:21:59.008 "trsvcid": "$NVMF_PORT", 00:21:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.008 "hdgst": ${hdgst:-false}, 00:21:59.008 "ddgst": ${ddgst:-false} 00:21:59.008 }, 00:21:59.008 "method": "bdev_nvme_attach_controller" 00:21:59.008 } 00:21:59.008 EOF 00:21:59.008 )") 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.008 { 00:21:59.008 "params": { 00:21:59.008 "name": "Nvme$subsystem", 00:21:59.008 "trtype": "$TEST_TRANSPORT", 00:21:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.008 "adrfam": "ipv4", 00:21:59.008 "trsvcid": "$NVMF_PORT", 00:21:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.008 "hdgst": ${hdgst:-false}, 00:21:59.008 "ddgst": ${ddgst:-false} 00:21:59.008 }, 00:21:59.008 "method": "bdev_nvme_attach_controller" 00:21:59.008 } 00:21:59.008 EOF 00:21:59.008 )") 00:21:59.008 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.009 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.009 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.009 { 00:21:59.009 "params": { 00:21:59.009 "name": "Nvme$subsystem", 00:21:59.009 "trtype": "$TEST_TRANSPORT", 00:21:59.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.009 "adrfam": "ipv4", 00:21:59.009 "trsvcid": "$NVMF_PORT", 00:21:59.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.009 "hdgst": ${hdgst:-false}, 00:21:59.009 "ddgst": ${ddgst:-false} 00:21:59.009 }, 00:21:59.009 "method": "bdev_nvme_attach_controller" 00:21:59.009 } 00:21:59.009 EOF 00:21:59.009 )") 00:21:59.009 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.266 { 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme$subsystem", 00:21:59.266 "trtype": "$TEST_TRANSPORT", 00:21:59.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "$NVMF_PORT", 00:21:59.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.266 "hdgst": ${hdgst:-false}, 00:21:59.266 "ddgst": ${ddgst:-false} 00:21:59.266 }, 00:21:59.266 
"method": "bdev_nvme_attach_controller" 00:21:59.266 } 00:21:59.266 EOF 00:21:59.266 )") 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.266 { 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme$subsystem", 00:21:59.266 "trtype": "$TEST_TRANSPORT", 00:21:59.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "$NVMF_PORT", 00:21:59.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.266 "hdgst": ${hdgst:-false}, 00:21:59.266 "ddgst": ${ddgst:-false} 00:21:59.266 }, 00:21:59.266 "method": "bdev_nvme_attach_controller" 00:21:59.266 } 00:21:59.266 EOF 00:21:59.266 )") 00:21:59.266 [2024-07-15 12:56:29.972550] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:59.266 [2024-07-15 12:56:29.972598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780781 ] 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.266 { 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme$subsystem", 00:21:59.266 "trtype": "$TEST_TRANSPORT", 00:21:59.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "$NVMF_PORT", 00:21:59.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.266 "hdgst": ${hdgst:-false}, 00:21:59.266 "ddgst": ${ddgst:-false} 00:21:59.266 }, 00:21:59.266 "method": "bdev_nvme_attach_controller" 00:21:59.266 } 00:21:59.266 EOF 00:21:59.266 )") 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.266 { 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme$subsystem", 00:21:59.266 "trtype": "$TEST_TRANSPORT", 00:21:59.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "$NVMF_PORT", 00:21:59.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.266 "hdgst": ${hdgst:-false}, 00:21:59.266 "ddgst": ${ddgst:-false} 00:21:59.266 }, 00:21:59.266 "method": "bdev_nvme_attach_controller" 00:21:59.266 } 00:21:59.266 EOF 00:21:59.266 )") 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.266 { 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme$subsystem", 
00:21:59.266 "trtype": "$TEST_TRANSPORT", 00:21:59.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "$NVMF_PORT", 00:21:59.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.266 "hdgst": ${hdgst:-false}, 00:21:59.266 "ddgst": ${ddgst:-false} 00:21:59.266 }, 00:21:59.266 "method": "bdev_nvme_attach_controller" 00:21:59.266 } 00:21:59.266 EOF 00:21:59.266 )") 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:59.266 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:59.266 12:56:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme1", 00:21:59.266 "trtype": "tcp", 00:21:59.266 "traddr": "10.0.0.2", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "4420", 00:21:59.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.266 "hdgst": false, 00:21:59.266 "ddgst": false 00:21:59.266 }, 00:21:59.266 "method": "bdev_nvme_attach_controller" 00:21:59.266 },{ 00:21:59.266 "params": { 00:21:59.266 "name": "Nvme2", 00:21:59.266 "trtype": "tcp", 00:21:59.266 "traddr": "10.0.0.2", 00:21:59.266 "adrfam": "ipv4", 00:21:59.266 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme3", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme4", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme5", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme6", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 
00:21:59.267 "name": "Nvme7", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme8", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme9", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 },{ 00:21:59.267 "params": { 00:21:59.267 "name": "Nvme10", 00:21:59.267 "trtype": "tcp", 00:21:59.267 "traddr": "10.0.0.2", 00:21:59.267 "adrfam": "ipv4", 00:21:59.267 "trsvcid": "4420", 00:21:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.267 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.267 "hdgst": false, 00:21:59.267 "ddgst": false 00:21:59.267 }, 00:21:59.267 "method": "bdev_nvme_attach_controller" 00:21:59.267 }' 00:21:59.267 [2024-07-15 12:56:30.040672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.267 [2024-07-15 12:56:30.121378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.162 Running I/O for 10 seconds... 
00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:01.162 12:56:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=86 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 86 -ge 100 ']' 00:22:01.421 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1780506 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1780506 ']' 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1780506 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1780506 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1780506' 00:22:01.681 killing process with pid 1780506 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1780506 00:22:01.681 12:56:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1780506 00:22:01.681 [2024-07-15 12:56:32.622506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.681 [2024-07-15 12:56:32.622689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622718] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 
00:22:01.682 [2024-07-15 12:56:32.622857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.622949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b430 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.624533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.624560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.624568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.624574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.624581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is same with the state(5) to be set 00:22:01.682 [2024-07-15 12:56:32.624587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is 
same with the state(5) to be set [log condensed: this tcp.c:1607 message repeats verbatim for tqpair=0xa7de30, one entry roughly every 6 microseconds, through 12:56:32.624939; representative entry below]
00:22:01.682 [2024-07-15 12:56:32.624593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7de30 is same with the state(5) to be set
00:22:01.683 [2024-07-15 12:56:32.626565] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
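The nvme_tcp.c:1241 entry just above (it recurs twice more below) is the host-side PDU dispatcher rejecting a common-header type byte of 0x00. In the NVMe/TCP transport spec, 0x00 is ICReq, which only ever flows host-to-controller, so a host should never receive one; an all-zero type byte mid-stream is more consistent with reading a torn-down or reset connection. A self-contained sketch of that sanity check follows (enum and function names are illustrative, not SPDK's):

#include <stdint.h>
#include <stdio.h>

/* NVMe/TCP PDU types per the transport spec (0x08 is reserved). */
enum pdu_type {
        PDU_IC_REQ       = 0x00, /* host -> controller only */
        PDU_IC_RESP      = 0x01,
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04,
        PDU_CAPSULE_RESP = 0x05,
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07,
        PDU_R2T          = 0x09,
};

/* A host connection should only ever see controller-to-host PDU types. */
static int host_may_receive(uint8_t type)
{
        switch (type) {
        case PDU_IC_RESP:
        case PDU_C2H_TERM_REQ:
        case PDU_CAPSULE_RESP:
        case PDU_C2H_DATA:
        case PDU_R2T:
                return 1;
        default:
                return 0;
        }
}

int main(void)
{
        uint8_t type = 0x00; /* the value rejected at nvme_tcp.c:1241 */
        if (!host_may_receive(type)) {
                fprintf(stderr, "Unexpected PDU type 0x%02x\n", type);
        }
        return 0;
}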
00:22:01.683 [2024-07-15 12:56:32.626617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b8d0 is same with the state(5) to be set [log condensed: repeats verbatim for tqpair=0xa7b8d0 through 12:56:32.627009]
00:22:01.683 [2024-07-15 12:56:32.628680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.683 [2024-07-15 12:56:32.628705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [log condensed: the command/completion pair repeats for cid:1, cid:2 and cid:3]
00:22:01.683 [2024-07-15 12:56:32.628758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11848b0 is same with the state(5) to be set
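The tcp.c:1607 floods above all come from SPDK's nvmf_tcp_qpair_set_recv_state() rejecting a no-op transition: while the connections are torn down, each qpair keeps being asked to enter the receive state it is already in. A minimal sketch of that guard, reconstructed from the message text rather than copied from the SPDK tree (the struct layout and enum values here are stand-ins so the sketch compiles on its own):

#include <stdio.h>

/* Stand-in definitions; the real ones live in SPDK's nvme_tcp.h / lib/nvmf/tcp.c. */
enum nvme_tcp_pdu_recv_state { RECV_STATE_AWAIT_PDU_READY = 0, RECV_STATE_ERROR = 5 };
struct spdk_nvmf_tcp_qpair { enum nvme_tcp_pdu_recv_state recv_state; };

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                              enum nvme_tcp_pdu_recv_state state)
{
        if (tqpair->recv_state == state) {
                /* The guard that emits the repeated tcp.c:1607 *ERROR* lines. */
                fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                        (void *)tqpair, (int)state);
                return;
        }
        tqpair->recv_state = state; /* normal transition; per-state bookkeeping elided */
}

The message is noisy but harmless in itself; the interesting signal for triage is how many times per qpair the caller retries the same transition during disconnect.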
00:22:01.683 [2024-07-15 12:56:32.628799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 [log condensed: the same four aborted-AER command/completion pairs print for tqpair=0x119b8d0 (through 12:56:32.628860) and again for tqpair=0xfcfc70 (through 12:56:32.628944), each block ending with the nvme_tcp.c: 327 recv-state error for that tqpair]
00:22:01.684 [2024-07-15 12:56:32.629526] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.684 [2024-07-15 12:56:32.630881] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:01.684 [2024-07-15 12:56:32.633383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7bd70 is same with the state(5) to be set
00:22:01.960 [2024-07-15 12:56:32.638864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c230 is same with the state(5) to be set [log condensed: repeats verbatim for tqpair=0xa7c230 through 12:56:32.639299]
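The aborted-AER notices are expected during teardown rather than a test failure: when an admin submission queue is deleted as a controller detaches, any ASYNC EVENT REQUEST commands still outstanding on it complete with status (00/08). SPDK prints completion status as (SCT/SC); decoding that pair against the NVMe base spec status tables, in a self-contained sketch (decode_status is a hypothetical helper, not an SPDK API):

#include <stdint.h>
#include <stdio.h>

/* Decode the "(00/08)" from spdk_nvme_print_completion: status code type 0x0
 * is the generic command status set, and generic status code 0x08 is
 * "Command Aborted due to SQ Deletion" (NVMe base spec). */
static void decode_status(uint8_t sct, uint8_t sc)
{
        if (sct == 0x0 && sc == 0x08) {
                printf("ABORTED - SQ DELETION: command reaped because its submission queue went away\n");
        } else {
                printf("sct=0x%x sc=0x%x: see the NVMe base spec status tables\n", sct, sc);
        }
}

int main(void)
{
        decode_status(0x0, 0x08); /* the pair printed for every aborted AER above */
        return 0;
}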
00:22:01.961 [2024-07-15 12:56:32.639992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7c6d0 is same with the state(5) to be set [log condensed: repeats verbatim for tqpair=0xa7c6d0 through 12:56:32.640448]
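All of these floods name state(5) as the transition target. Read against the receive-state enum that SPDK's TCP paths share, that is most plausibly the terminal error state reached during disconnect. The ordering below is an assumption about this build's nvme_tcp.h, not verified against the exact SPDK revision under test; if the QUIESCING state is absent in that tree, ERROR would be 4 rather than 5:

/* Assumed ordering of SPDK's nvme_tcp_pdu_recv_state for this build. */
enum nvme_tcp_pdu_recv_state {
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY   = 0,
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_CH      = 1,
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH     = 2,
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PAYLOAD = 3,
        NVME_TCP_PDU_RECV_STATE_QUIESCING         = 4,
        NVME_TCP_PDU_RECV_STATE_ERROR             = 5,
};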
00:22:01.961 [2024-07-15 12:56:32.641279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7cb90 is same with the state(5) to be set [log condensed: repeats verbatim for tqpair=0xa7cb90 through 12:56:32.641687]
00:22:01.962 [2024-07-15 12:56:32.642691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d030 is same with the state(5) to be set [log condensed: repeats verbatim for tqpair=0xa7d030 through 12:56:32.643090]
00:22:01.963 [2024-07-15 12:56:32.643898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set [log condensed: repeats verbatim for tqpair=0xa7d4d0 through 12:56:32.644244; the run resumes uncondensed below] 00:22:01.963 [2024-07-15 12:56:32.644250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is
same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d4d0 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d970 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d970 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d970 is same with the state(5) to be set 00:22:01.963 [2024-07-15 12:56:32.644868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d970 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.644874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d970 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.644882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7d970 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 
[2024-07-15 12:56:32.647493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1013b30 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e340 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c1d0 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4050 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11848b0 (9): Bad file descriptor 00:22:01.964 [2024-07-15 12:56:32.647795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119b0d0 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b8d0 (9): Bad file descriptor 00:22:01.964 [2024-07-15 12:56:32.647896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647912] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.647955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1016bf0 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.647969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcfc70 (9): Bad file descriptor 00:22:01.964 [2024-07-15 12:56:32.647990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.647999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.648007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.648013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.648022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.648029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.648036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.964 [2024-07-15 12:56:32.648043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-15 12:56:32.648050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff2190 is same with the state(5) to be set 00:22:01.964 [2024-07-15 12:56:32.666536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-15 12:56:32.666584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:01.965 [2024-07-15 12:56:32.666775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.965 [2024-07-15 12:56:32.666940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.666987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.666994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 
[2024-07-15 12:56:32.667095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 
12:56:32.667257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-15 12:56:32.667264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-15 12:56:32.667273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 
12:56:32.667411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 
12:56:32.667564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667666] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10f8a60 was disconnected and freed. reset controller. 00:22:01.966 [2024-07-15 12:56:32.667732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1013b30 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e340 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100c1d0 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4050 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b0d0 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016bf0 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff2190 (9): Bad file descriptor 00:22:01.966 [2024-07-15 12:56:32.667887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.667989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.667996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:01.966 [2024-07-15 12:56:32.668126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-15 12:56:32.668174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-15 12:56:32.668183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-15 12:56:32.668190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-15 12:56:32.668199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-15 12:56:32.668206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-15 12:56:32.668216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-15 12:56:32.668223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-15 12:56:32.668237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-15 12:56:32.668243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-15 12:56:32.668252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-15 12:56:32.668260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-15 12:56:32.668268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-15 12:56:32.668275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-15 12:56:32.668284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 
00:22:01.967 [2024-07-15 12:56:32.668291 - 12:56:32.668904] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 39 in-flight I/O commands on qpair 0x1098920 completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (WRITE sqid:1 cid:29-63 nsid:1 lba:36480-40832 len:128; READ sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)
00:22:01.967 [2024-07-15 12:56:32.668911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1098920 is same with the state(5) to be set
00:22:01.968 [2024-07-15 12:56:32.668964] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1098920 was disconnected and freed. reset controller.
00:22:01.968 [2024-07-15 12:56:32.670173 - 12:56:32.671191] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 in-flight I/O commands on a second qpair completed as ABORTED - SQ DELETION (00/08) (READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128)
00:22:01.969 [2024-07-15 12:56:32.671200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110ea0 is same with the state(5) to be set
00:22:01.969 [2024-07-15 12:56:32.672227 - 12:56:32.681890] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 in-flight I/O commands on a third qpair completed as ABORTED - SQ DELETION (00/08) (READ sqid:1 cid:5-63 nsid:1 lba:25216-32640 len:128; WRITE sqid:1 cid:0-4 nsid:1 lba:32768-33280 len:128)
00:22:01.971 [2024-07-15 12:56:32.681900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1097490 is same with the state(5) to be set
00:22:01.971 [2024-07-15 12:56:32.684494 - 12:56:32.685106] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a further abort storm begins on a fourth qpair (READ sqid:1 cid:0-31 nsid:1 lba:16384-20352 len:128, each ABORTED - SQ DELETION (00/08)); entries continue past this excerpt
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:01.972 [2024-07-15 12:56:32.685498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 
12:56:32.685681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.972 [2024-07-15 12:56:32.685717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.972 [2024-07-15 12:56:32.685729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f9ef0 is same with the state(5) to be set 00:22:01.972 [2024-07-15 12:56:32.687253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:01.972 [2024-07-15 12:56:32.687278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.972 [2024-07-15 12:56:32.687289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:01.972 [2024-07-15 12:56:32.687365] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.972 [2024-07-15 12:56:32.687379] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.972 [2024-07-15 12:56:32.687478] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.972 [2024-07-15 12:56:32.687548] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.972 [2024-07-15 12:56:32.687596] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.972 [2024-07-15 12:56:32.687937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:01.972 [2024-07-15 12:56:32.687954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:01.972 [2024-07-15 12:56:32.688138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.972 [2024-07-15 12:56:32.688155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119b0d0 with addr=10.0.0.2, port=4420 00:22:01.972 [2024-07-15 12:56:32.688165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119b0d0 is same with the state(5) to be set 00:22:01.972 [2024-07-15 12:56:32.688407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.972 [2024-07-15 12:56:32.688420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcfc70 with addr=10.0.0.2, port=4420 00:22:01.972 [2024-07-15 12:56:32.688429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcfc70 is same with the state(5) to be set 00:22:01.972 [2024-07-15 12:56:32.688579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.972 [2024-07-15 12:56:32.688591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119b8d0 with 
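
Annotation, not part of the captured output: the three connect() failures above report errno = 111, which on Linux is ECONNREFUSED -- the target side at 10.0.0.2:4420 (the conventional NVMe/TCP port) is not accepting connections while the subsystems restart, so each reconnect attempt is refused and retried. A self-contained sketch that reproduces the same errno with plain POSIX sockets when the address is reachable but nothing listens on the port (no SPDK code; the endpoint is taken from the log):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Same endpoint the reconnect attempts in the log target. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        /* With no listener on the port, connect() fails and errno is 111
         * (ECONNREFUSED) -- the value posix_sock_create prints above. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }
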
00:22:01.972 [2024-07-15 12:56:32.689411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.972 [2024-07-15 12:56:32.689426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for READ cid:6 through cid:63 (lba 25344 through 32640) and then WRITE cid:0 through cid:4 (lba 32768 through 33280); ~126 near-identical lines omitted ...]
00:22:01.974 [2024-07-15 12:56:32.690621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc9b70 is same with the state(5) to be set
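
Annotation, not part of the captured output: every aborted completion above carries the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion -- queued I/O is failed back while the submission queues are torn down for the controller resets logged earlier. A minimal decoding sketch (the helper below is illustrative, not an SPDK API; only the codes seen in this log are filled in):

    #include <stdio.h>

    /* Decode the "(sct/sc)" pair printed by spdk_nvme_print_completion above.
     * The table covers only the generic-status codes that appear in this log. */
    static const char *status_str(unsigned sct, unsigned sc)
    {
        if (sct == 0x0) {                 /* generic command status */
            if (sc == 0x00)
                return "SUCCESSFUL COMPLETION";
            if (sc == 0x08)
                return "ABORTED - SQ DELETION";
        }
        return "unknown status";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", status_str(0x0, 0x08));
        return 0;
    }
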
00:22:01.974 [2024-07-15 12:56:32.691778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.974 [2024-07-15 12:56:32.691791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1 through cid:63 (lba 24704 through 32640); ~126 near-identical lines omitted ...]
00:22:01.976 [2024-07-15 12:56:32.694174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.976 [2024-07-15 12:56:32.694187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1 through cid:10 (lba 24704 through 25856); ~20 near-identical lines omitted ...]
00:22:01.976 [2024-07-15 12:56:32.694396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694405] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.976 [2024-07-15 12:56:32.694827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.976 [2024-07-15 12:56:32.694838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.694988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.694995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.695402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.695412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1111910 is same with the state(5) to be set 00:22:01.977 [2024-07-15 12:56:32.696603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.696618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.696631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.696640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.696651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.977 [2024-07-15 12:56:32.696661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.977 [2024-07-15 12:56:32.696671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.978 [2024-07-15 12:56:32.696684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.978 [2024-07-15 12:56:32.696696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.978 [2024-07-15 12:56:32.696705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.978 [2024-07-15 12:56:32.696715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.978 [2024-07-15 12:56:32.696725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.978 [2024-07-15 12:56:32.696737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.978 [2024-07-15 12:56:32.696746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.978 [2024-07-15 12:56:32.696755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
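The runs of *NOTICE* pairs above record every queued READ on the I/O qpair being completed with ABORTED - SQ DELETION (status 00/08) while the target's submission queues are torn down for a controller reset. When triaging a log like this, a tally of the aborted commands and their cid/lba span is usually more useful than the raw dump. A minimal sketch with standard grep/awk; the log file name build.log is an assumption, point it at the real build artifact:

```bash
#!/usr/bin/env bash
# Tally the "ABORTED - SQ DELETION" storm in an SPDK autotest log.
# build.log is an assumed name; pass the actual log path as $1.
LOG=${1:-build.log}

echo -n "aborted completions: "
grep -c 'ABORTED - SQ DELETION' "$LOG"

# cid/lba span of the aborted READs (fields: READ sqid:1 cid:N nsid:1 lba:M).
grep -o 'READ sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' "$LOG" |
awk -F'[: ]' '
    { cid = $5 + 0; lba = $9 + 0 }
    NR == 1 { minc = maxc = cid; minl = maxl = lba }
    { if (cid < minc) minc = cid; if (cid > maxc) maxc = cid
      if (lba < minl) minl = lba; if (lba > maxl) maxl = lba }
    END { printf "READs: %d, cid %d-%d, lba %d-%d\n", NR, minc, maxc, minl, maxl }'
```

Run against this excerpt it would report cid 0-63 over lba 24576-32640, i.e. the whole in-flight window of the qpair was aborted on each reset cycle.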
00:22:01.977 [2024-07-15 12:56:32.696603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.977 [2024-07-15 12:56:32.696618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 identical READ/ABORTED - SQ DELETION pairs for cid:1 through cid:63 (lba 24704-32640, len:128), host timestamps 12:56:32.696631-12:56:32.697839 ...]
00:22:01.979 [2024-07-15 12:56:32.699252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:01.979 [2024-07-15 12:56:32.699270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:01.979 [2024-07-15 12:56:32.699280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:01.979 [2024-07-15 12:56:32.699568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.979 [2024-07-15 12:56:32.699583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4050 with addr=10.0.0.2, port=4420
00:22:01.979 [2024-07-15 12:56:32.699592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4050 is same with the state(5) to be set
00:22:01.979 [2024-07-15 12:56:32.699802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:01.979 [2024-07-15 12:56:32.699813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11848b0 with addr=10.0.0.2, port=4420
00:22:01.979 [2024-07-15 12:56:32.699820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11848b0 is same with the state(5) to be set
00:22:01.979 [2024-07-15 12:56:32.699832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b0d0 (9): Bad file descriptor
00:22:01.979 [2024-07-15 12:56:32.699843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcfc70 (9): Bad file descriptor
00:22:01.979 [2024-07-15 12:56:32.699854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b8d0 (9): Bad file descriptor
00:22:01.979 [2024-07-15 12:56:32.699886] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the same failover notice repeats four more times, host timestamps 12:56:32.699900-12:56:32.699934 ...]
00:22:01.979 [2024-07-15 12:56:32.699945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11848b0 (9): Bad file descriptor
00:22:01.979 [2024-07-15 12:56:32.699957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4050 (9): Bad file descriptor
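errno = 111 in the connect() failures above is ECONNREFUSED on Linux: while cnode4-6 are resetting, the host's reconnect attempts reach 10.0.0.2:4420 before the target listener is back, and the stale qpairs are then flushed with EBADF (the "(9)" in the flush errors). A minimal sketch for watching the listener come back from a shell, assuming bash with /dev/tcp support on the test host; address and port are taken from the nvme_tcp_qpair_connect_sock errors above:

```bash
#!/usr/bin/env bash
# Probe the NVMe/TCP listener the reconnects in this log are targeting.
# 10.0.0.2:4420 comes from the sock connection errors above.
ADDR=10.0.0.2 PORT=4420
for i in $(seq 1 10); do
    # /dev/tcp is a bash built-in path; a successful open means accept()ing.
    if timeout 1 bash -c "exec 3<>/dev/tcp/$ADDR/$PORT" 2>/dev/null; then
        echo "attempt $i: listener on $ADDR:$PORT is accepting connections"
        break
    fi
    echo "attempt $i: connect failed (cf. errno = 111 in the log)"
    sleep 1
done
```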
00:22:01.979 [2024-07-15 12:56:32.700051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.979 [2024-07-15 12:56:32.700062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 6 more pairs aborted the same way: READ cid:5-6 (lba 25216-25344) and WRITE cid:0-3 (lba 32768-33152), host timestamps 12:56:32.700073-12:56:32.700174 ...]
[... 38 identical READ/ABORTED - SQ DELETION pairs for cid:7 through cid:44 (lba 25472-30208, len:128), host timestamps 12:56:32.700182-12:56:32.700812 ...]
00:22:01.980 [2024-07-15 12:56:32.700821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.980 [2024-07-15 12:56:32.700828] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.700985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.700992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.701001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.701008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.701017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.701025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.701033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.980 [2024-07-15 12:56:32.701040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.980 [2024-07-15 12:56:32.701050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.981 [2024-07-15 12:56:32.701057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.981 [2024-07-15 12:56:32.701066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.981 [2024-07-15 12:56:32.701073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.981 [2024-07-15 12:56:32.701082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.981 [2024-07-15 12:56:32.701089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.981 [2024-07-15 12:56:32.701098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.981 [2024-07-15 12:56:32.701106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.981 [2024-07-15 12:56:32.701114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.981 [2024-07-15 12:56:32.701122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.981 [2024-07-15 12:56:32.701130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1112de0 is same with the state(5) to be set 00:22:01.981 [2024-07-15 12:56:32.702976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:01.981 task offset: 24576 on job bdev=Nvme9n1 fails
00:22:01.981 
00:22:01.981 Latency(us)
00:22:01.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme1n1 ended in about 0.91 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme1n1 : 0.91 211.30 13.21 70.43 0.00 224939.85 16640.45 214274.23
00:22:01.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme2n1 ended in about 0.92 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme2n1 : 0.92 214.27 13.39 69.61 0.00 219372.89 10884.67 217009.64
00:22:01.981 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme3n1 ended in about 0.92 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme3n1 : 0.92 278.10 17.38 69.52 0.00 175928.90 15272.74 201508.95
00:22:01.981 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme4n1 ended in about 0.93 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme4n1 : 0.93 212.27 13.27 68.96 0.00 213686.88 17210.32 223392.28
00:22:01.981 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme5n1 ended in about 0.93 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme5n1 : 0.93 206.35 12.90 68.78 0.00 214513.98 17096.35 217921.45
00:22:01.981 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme6n1 ended in about 0.93 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme6n1 : 0.93 205.82 12.86 68.61 0.00 211179.97 15842.62 217921.45
00:22:01.981 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme7n1 ended in about 0.94 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme7n1 : 0.94 208.84 13.05 68.19 0.00 205443.47 7038.00 215186.03
00:22:01.981 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme8n1 ended in about 0.94 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme8n1 : 0.94 205.29 12.83 68.43 0.00 203858.14 14189.97 223392.28
00:22:01.981 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme9n1 ended in about 0.91 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme9n1 : 0.91 211.77 13.24 70.59 0.00 192863.72 18008.15 244363.80
00:22:01.981 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.981 Job: Nvme10n1 ended in about 0.92 seconds with error
00:22:01.981 Verification LBA range: start 0x0 length 0x400
00:22:01.981 Nvme10n1 : 0.92 138.65 8.67 69.32 0.00 257520.27 19717.79 269894.34
00:22:01.981 ===================================================================================================================
00:22:01.981 Total : 2092.64 130.79 692.45 0.00 209909.90 7038.00 269894.34
00:22:01.981 [2024-07-15 12:56:32.729346] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:01.981 [2024-07-15 12:56:32.729387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:01.981 [2024-07-15 12:56:32.729714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.981 [2024-07-15 12:56:32.729735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100c1d0 with addr=10.0.0.2, port=4420
00:22:01.981 [2024-07-15 12:56:32.729746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c1d0 is same with the state(5) to be set 00:22:01.981 [2024-07-15 12:56:32.729991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.981 [2024-07-15 12:56:32.730004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1e340 with addr=10.0.0.2, port=4420 00:22:01.981 [2024-07-15 12:56:32.730017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e340 is same with the state(5) to be set 00:22:01.981 [2024-07-15 12:56:32.730234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.981 [2024-07-15 12:56:32.730247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1013b30 with addr=10.0.0.2, port=4420 00:22:01.981 [2024-07-15 12:56:32.730257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1013b30 is same with the state(5) to be set 00:22:01.981 [2024-07-15 12:56:32.730268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:01.981 [2024-07-15 12:56:32.730276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:01.981 [2024-07-15 12:56:32.730286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:01.981 [2024-07-15 12:56:32.730301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.981 [2024-07-15 12:56:32.730309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.981 [2024-07-15 12:56:32.730317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.981 [2024-07-15 12:56:32.730331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:01.981 [2024-07-15 12:56:32.730338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:01.981 [2024-07-15 12:56:32.730346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:01.981 [2024-07-15 12:56:32.731419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.981 [2024-07-15 12:56:32.731434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.981 [2024-07-15 12:56:32.731442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
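
The repeated "connect() failed, errno = 111" messages in this stretch are the expected symptom: errno 111 is ECONNREFUSED on Linux, meaning nothing is listening on 10.0.0.2:4420 anymore once the target application has been stopped, so every reconnect attempt from the host-side bdev_nvme code is refused and each controller ends up in a failed state. A quick way to confirm the errno mapping on the test host (a sketch; it assumes python3 is on PATH, as it is elsewhere in this run):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused
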
00:22:01.981 [2024-07-15 12:56:32.731645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.981 [2024-07-15 12:56:32.731660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1016bf0 with addr=10.0.0.2, port=4420 00:22:01.981 [2024-07-15 12:56:32.731669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1016bf0 is same with the state(5) to be set 00:22:01.981 [2024-07-15 12:56:32.731861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.981 [2024-07-15 12:56:32.731873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff2190 with addr=10.0.0.2, port=4420 00:22:01.981 [2024-07-15 12:56:32.731882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff2190 is same with the state(5) to be set 00:22:01.981 [2024-07-15 12:56:32.731895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100c1d0 (9): Bad file descriptor 00:22:01.981 [2024-07-15 12:56:32.731908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e340 (9): Bad file descriptor 00:22:01.981 [2024-07-15 12:56:32.731918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1013b30 (9): Bad file descriptor 00:22:01.981 [2024-07-15 12:56:32.731928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:01.981 [2024-07-15 12:56:32.731935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:01.981 [2024-07-15 12:56:32.731942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:01.981 [2024-07-15 12:56:32.731957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:01.981 [2024-07-15 12:56:32.731968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:01.981 [2024-07-15 12:56:32.731977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:01.981 [2024-07-15 12:56:32.732024] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.981 [2024-07-15 12:56:32.732037] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.981 [2024-07-15 12:56:32.732049] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.982 [2024-07-15 12:56:32.732060] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.982 [2024-07-15 12:56:32.732072] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.982 [2024-07-15 12:56:32.732391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.982 [2024-07-15 12:56:32.732405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.982 [2024-07-15 12:56:32.732424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016bf0 (9): Bad file descriptor 00:22:01.982 [2024-07-15 12:56:32.732435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff2190 (9): Bad file descriptor 00:22:01.982 [2024-07-15 12:56:32.732445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.732453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.732460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:01.982 [2024-07-15 12:56:32.732470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.732478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.732486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:01.982 [2024-07-15 12:56:32.732496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.732502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.732510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:01.982 [2024-07-15 12:56:32.732571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:01.982 [2024-07-15 12:56:32.732583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.982 [2024-07-15 12:56:32.732592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:01.982 [2024-07-15 12:56:32.732601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.982 [2024-07-15 12:56:32.732608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.982 [2024-07-15 12:56:32.732615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.982 [2024-07-15 12:56:32.732638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.732646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.732653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:01.982 [2024-07-15 12:56:32.732663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.732670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.732681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:01.982 [2024-07-15 12:56:32.732712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.982 [2024-07-15 12:56:32.732720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.982 [2024-07-15 12:56:32.732888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.982 [2024-07-15 12:56:32.732901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119b8d0 with addr=10.0.0.2, port=4420 00:22:01.982 [2024-07-15 12:56:32.732909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119b8d0 is same with the state(5) to be set 00:22:01.982 [2024-07-15 12:56:32.733013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.982 [2024-07-15 12:56:32.733024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcfc70 with addr=10.0.0.2, port=4420 00:22:01.982 [2024-07-15 12:56:32.733032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcfc70 is same with the state(5) to be set 00:22:01.982 [2024-07-15 12:56:32.733193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.982 [2024-07-15 12:56:32.733205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119b0d0 with addr=10.0.0.2, port=4420 00:22:01.982 [2024-07-15 12:56:32.733213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119b0d0 is same with the state(5) to be set 00:22:01.982 [2024-07-15 12:56:32.733250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b8d0 (9): Bad file descriptor 00:22:01.982 [2024-07-15 12:56:32.733263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcfc70 (9): Bad file descriptor 00:22:01.982 [2024-07-15 12:56:32.733273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119b0d0 (9): Bad file descriptor 00:22:01.982 [2024-07-15 12:56:32.733301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.733310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.733318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:01.982 [2024-07-15 12:56:32.733329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.733336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.733342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.982 [2024-07-15 12:56:32.733351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:01.982 [2024-07-15 12:56:32.733359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:01.982 [2024-07-15 12:56:32.733366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:01.982 [2024-07-15 12:56:32.733392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
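
At this point every remaining controller has been marked failed and tc3 heads into its epilogue below: the stale nvmfpid is cleared, the bdevperf state and config files are removed, and nvmftestfini unloads the kernel initiator stack. A condensed sketch of that teardown (module names and the flushed interface are taken from the modprobe/rmmod/ip output that follows):

    sync
    modprobe -v -r nvme-tcp        # rmmod output below shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address
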
00:22:01.982 [2024-07-15 12:56:32.733401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.982 [2024-07-15 12:56:32.733407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.262 12:56:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:02.263 12:56:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1780781 00:22:03.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1780781) - No such process 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.201 rmmod nvme_tcp 00:22:03.201 rmmod nvme_fabrics 00:22:03.201 rmmod nvme_keyring 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.201 12:56:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.739 00:22:05.739 real 0m7.997s 00:22:05.739 user 0m20.144s 00:22:05.739 sys 0m1.309s 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.739 ************************************ 00:22:05.739 END TEST nvmf_shutdown_tc3 00:22:05.739 ************************************ 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:05.739 00:22:05.739 real 0m31.954s 00:22:05.739 user 1m20.733s 00:22:05.739 sys 0m8.567s 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.739 12:56:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.739 ************************************ 00:22:05.739 END TEST nvmf_shutdown 00:22:05.739 ************************************ 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:05.739 12:56:36 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.739 12:56:36 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.739 12:56:36 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:05.739 12:56:36 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.739 12:56:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.739 ************************************ 00:22:05.739 START TEST nvmf_multicontroller 00:22:05.739 ************************************ 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:05.739 * Looking for test storage... 
00:22:05.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:05.739 12:56:36 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.739 12:56:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.019 12:56:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.019 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.019 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.019 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.279 12:56:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.279 12:56:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:22:11.279 00:22:11.279 --- 10.0.0.2 ping statistics --- 00:22:11.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.279 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:22:11.279 00:22:11.279 --- 10.0.0.1 ping statistics --- 00:22:11.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.279 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.279 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1784837 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1784837 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1784837 ']' 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.538 12:56:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.538 [2024-07-15 12:56:42.292721] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:11.539 [2024-07-15 12:56:42.292762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.539 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.539 [2024-07-15 12:56:42.367125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:11.539 [2024-07-15 12:56:42.445251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.539 [2024-07-15 12:56:42.445292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.539 [2024-07-15 12:56:42.445299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.539 [2024-07-15 12:56:42.445305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.539 [2024-07-15 12:56:42.445310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
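(For anyone replaying this setup outside the harness: the nvmf_tcp_init trace above reduces to the standalone sketch below. The cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the nvmf_tgt flags are taken verbatim from this run; run as root. This is a sketch of what the harness does, not a replacement for nvmf/common.sh.)

  # Put the target-side port in a namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic from the initiator side in, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target then runs inside the namespace, as traced above:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE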
00:22:11.539 [2024-07-15 12:56:42.445424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.539 [2024-07-15 12:56:42.445442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.539 [2024-07-15 12:56:42.445452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 [2024-07-15 12:56:43.146582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 Malloc0 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 [2024-07-15 12:56:43.208515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 
12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 [2024-07-15 12:56:43.216453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 Malloc1 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1785082 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1785082 /var/tmp/bdevperf.sock 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1785082 ']' 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.475 12:56:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.413 NVMe0n1 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.413 1 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.413 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.673 request: 00:22:13.673 { 00:22:13.673 "name": "NVMe0", 00:22:13.673 "trtype": "tcp", 00:22:13.673 "traddr": "10.0.0.2", 00:22:13.673 "adrfam": "ipv4", 00:22:13.673 "trsvcid": "4420", 00:22:13.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.673 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:13.673 "hostaddr": "10.0.0.2", 00:22:13.673 "hostsvcid": "60000", 00:22:13.673 "prchk_reftag": false, 00:22:13.673 "prchk_guard": false, 00:22:13.673 "hdgst": false, 00:22:13.673 "ddgst": false, 00:22:13.673 "method": "bdev_nvme_attach_controller", 00:22:13.673 "req_id": 1 00:22:13.673 } 00:22:13.673 Got JSON-RPC error response 00:22:13.673 response: 00:22:13.673 { 00:22:13.673 "code": -114, 00:22:13.673 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:13.673 } 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.673 request: 00:22:13.673 { 00:22:13.673 "name": "NVMe0", 00:22:13.673 "trtype": "tcp", 00:22:13.673 "traddr": "10.0.0.2", 00:22:13.673 "adrfam": "ipv4", 00:22:13.673 "trsvcid": "4420", 00:22:13.673 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:13.673 "hostaddr": "10.0.0.2", 00:22:13.673 "hostsvcid": "60000", 00:22:13.673 "prchk_reftag": false, 00:22:13.673 "prchk_guard": false, 00:22:13.673 
"hdgst": false, 00:22:13.673 "ddgst": false, 00:22:13.673 "method": "bdev_nvme_attach_controller", 00:22:13.673 "req_id": 1 00:22:13.673 } 00:22:13.673 Got JSON-RPC error response 00:22:13.673 response: 00:22:13.673 { 00:22:13.673 "code": -114, 00:22:13.673 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:13.673 } 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.673 request: 00:22:13.673 { 00:22:13.673 "name": "NVMe0", 00:22:13.673 "trtype": "tcp", 00:22:13.673 "traddr": "10.0.0.2", 00:22:13.673 "adrfam": "ipv4", 00:22:13.673 "trsvcid": "4420", 00:22:13.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.673 "hostaddr": "10.0.0.2", 00:22:13.673 "hostsvcid": "60000", 00:22:13.673 "prchk_reftag": false, 00:22:13.673 "prchk_guard": false, 00:22:13.673 "hdgst": false, 00:22:13.673 "ddgst": false, 00:22:13.673 "multipath": "disable", 00:22:13.673 "method": "bdev_nvme_attach_controller", 00:22:13.673 "req_id": 1 00:22:13.673 } 00:22:13.673 Got JSON-RPC error response 00:22:13.673 response: 00:22:13.673 { 00:22:13.673 "code": -114, 00:22:13.673 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:13.673 } 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.673 12:56:44 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.673 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.673 request: 00:22:13.673 { 00:22:13.673 "name": "NVMe0", 00:22:13.673 "trtype": "tcp", 00:22:13.673 "traddr": "10.0.0.2", 00:22:13.673 "adrfam": "ipv4", 00:22:13.673 "trsvcid": "4420", 00:22:13.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.673 "hostaddr": "10.0.0.2", 00:22:13.673 "hostsvcid": "60000", 00:22:13.673 "prchk_reftag": false, 00:22:13.673 "prchk_guard": false, 00:22:13.673 "hdgst": false, 00:22:13.673 "ddgst": false, 00:22:13.673 "multipath": "failover", 00:22:13.673 "method": "bdev_nvme_attach_controller", 00:22:13.673 "req_id": 1 00:22:13.673 } 00:22:13.673 Got JSON-RPC error response 00:22:13.673 response: 00:22:13.673 { 00:22:13.673 "code": -114, 00:22:13.674 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:13.674 } 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.674 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.674 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.932 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:13.932 12:56:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.870 0 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1785082 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1785082 ']' 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1785082 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.870 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1785082 00:22:15.130 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.130 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.130 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1785082' 00:22:15.130 killing process with pid 1785082 00:22:15.130 12:56:45 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1785082 00:22:15.130 12:56:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1785082 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:15.130 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:15.130 [2024-07-15 12:56:43.319099] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:15.130 [2024-07-15 12:56:43.319144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785082 ] 00:22:15.130 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.130 [2024-07-15 12:56:43.387170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.130 [2024-07-15 12:56:43.467405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.130 [2024-07-15 12:56:44.656941] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 254a4723-54c4-4b89-8445-159849bee666 already exists 00:22:15.130 [2024-07-15 12:56:44.656970] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:254a4723-54c4-4b89-8445-159849bee666 alias for bdev NVMe1n1 00:22:15.130 [2024-07-15 12:56:44.656978] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:15.130 Running I/O for 1 seconds... 
00:22:15.130 
00:22:15.130 Latency(us)
00:22:15.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.130 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:15.130 NVMe0n1 : 1.01 23526.67 91.90 0.00 0.00 5423.22 4017.64 9175.04
00:22:15.130 ===================================================================================================================
00:22:15.130 Total : 23526.67 91.90 0.00 0.00 5423.22 4017.64 9175.04
00:22:15.130 Received shutdown signal, test time was about 1.000000 seconds
00:22:15.130 
00:22:15.130 Latency(us)
00:22:15.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.130 ===================================================================================================================
00:22:15.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
--- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:15.130 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:15.389 rmmod nvme_tcp
00:22:15.389 rmmod nvme_fabrics
00:22:15.389 rmmod nvme_keyring
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1784837 ']'
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1784837
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1784837 ']'
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1784837
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1784837
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1784837'
00:22:15.389 killing process with pid 1784837
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1784837
00:22:15.389 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1784837
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:15.648 12:56:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:17.554 12:56:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:17.554 
00:22:17.554 real 0m12.106s
00:22:17.554 user 0m16.512s
00:22:17.554 sys 0m5.065s
00:22:17.554 12:56:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:17.554 12:56:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:17.554 ************************************
00:22:17.554 END TEST nvmf_multicontroller
00:22:17.554 ************************************
00:22:17.554 12:56:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:22:17.554 12:56:48 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:17.554 12:56:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:17.554 12:56:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:17.554 12:56:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:17.813 ************************************
00:22:17.813 START TEST nvmf_aer
00:22:17.813 ************************************
00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:17.813 * Looking for test storage...
00:22:17.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.813 12:56:48 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.814 12:56:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.382 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:24.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:22:24.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:24.383 Found net devices under 0000:86:00.0: cvl_0_0 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:24.383 Found net devices under 0000:86:00.1: cvl_0_1 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.383 
12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:24.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:22:24.383 00:22:24.383 --- 10.0.0.2 ping statistics --- 00:22:24.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.383 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:22:24.383 00:22:24.383 --- 10.0.0.1 ping statistics --- 00:22:24.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.383 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1789078 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1789078 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1789078 ']' 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.383 12:56:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 [2024-07-15 12:56:54.451662] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:24.383 [2024-07-15 12:56:54.451710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.383 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.383 [2024-07-15 12:56:54.521498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.383 [2024-07-15 12:56:54.601077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.383 [2024-07-15 12:56:54.601112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:24.383 [2024-07-15 12:56:54.601119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.383 [2024-07-15 12:56:54.601125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.383 [2024-07-15 12:56:54.601130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.383 [2024-07-15 12:56:54.601271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.383 [2024-07-15 12:56:54.601359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.383 [2024-07-15 12:56:54.601446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.383 [2024-07-15 12:56:54.601447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 [2024-07-15 12:56:55.295152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 Malloc0 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.383 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.384 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.384 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.384 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.384 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.384 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.643 [2024-07-15 12:56:55.338760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.643 [ 00:22:24.643 { 00:22:24.643 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:24.643 "subtype": "Discovery", 00:22:24.643 "listen_addresses": [], 00:22:24.643 "allow_any_host": true, 00:22:24.643 "hosts": [] 00:22:24.643 }, 00:22:24.643 { 00:22:24.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.643 "subtype": "NVMe", 00:22:24.643 "listen_addresses": [ 00:22:24.643 { 00:22:24.643 "trtype": "TCP", 00:22:24.643 "adrfam": "IPv4", 00:22:24.643 "traddr": "10.0.0.2", 00:22:24.643 "trsvcid": "4420" 00:22:24.643 } 00:22:24.643 ], 00:22:24.643 "allow_any_host": true, 00:22:24.643 "hosts": [], 00:22:24.643 "serial_number": "SPDK00000000000001", 00:22:24.643 "model_number": "SPDK bdev Controller", 00:22:24.643 "max_namespaces": 2, 00:22:24.643 "min_cntlid": 1, 00:22:24.643 "max_cntlid": 65519, 00:22:24.643 "namespaces": [ 00:22:24.643 { 00:22:24.643 "nsid": 1, 00:22:24.643 "bdev_name": "Malloc0", 00:22:24.643 "name": "Malloc0", 00:22:24.643 "nguid": "46697262D7E7438A8AADCA7EEAC2D454", 00:22:24.643 "uuid": "46697262-d7e7-438a-8aad-ca7eeac2d454" 00:22:24.643 } 00:22:24.643 ] 00:22:24.643 } 00:22:24.643 ] 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1789164 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:24.643 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.643 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.903 Malloc1 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.903 Asynchronous Event Request test 00:22:24.903 Attaching to 10.0.0.2 00:22:24.903 Attached to 10.0.0.2 00:22:24.903 Registering asynchronous event callbacks... 00:22:24.903 Starting namespace attribute notice tests for all controllers... 00:22:24.903 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:24.903 aer_cb - Changed Namespace 00:22:24.903 Cleaning up... 00:22:24.903 [ 00:22:24.903 { 00:22:24.903 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:24.903 "subtype": "Discovery", 00:22:24.903 "listen_addresses": [], 00:22:24.903 "allow_any_host": true, 00:22:24.903 "hosts": [] 00:22:24.903 }, 00:22:24.903 { 00:22:24.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.903 "subtype": "NVMe", 00:22:24.903 "listen_addresses": [ 00:22:24.903 { 00:22:24.903 "trtype": "TCP", 00:22:24.903 "adrfam": "IPv4", 00:22:24.903 "traddr": "10.0.0.2", 00:22:24.903 "trsvcid": "4420" 00:22:24.903 } 00:22:24.903 ], 00:22:24.903 "allow_any_host": true, 00:22:24.903 "hosts": [], 00:22:24.903 "serial_number": "SPDK00000000000001", 00:22:24.903 "model_number": "SPDK bdev Controller", 00:22:24.903 "max_namespaces": 2, 00:22:24.903 "min_cntlid": 1, 00:22:24.903 "max_cntlid": 65519, 00:22:24.903 "namespaces": [ 00:22:24.903 { 00:22:24.903 "nsid": 1, 00:22:24.903 "bdev_name": "Malloc0", 00:22:24.903 "name": "Malloc0", 00:22:24.903 "nguid": "46697262D7E7438A8AADCA7EEAC2D454", 00:22:24.903 "uuid": "46697262-d7e7-438a-8aad-ca7eeac2d454" 00:22:24.903 }, 00:22:24.903 { 00:22:24.903 "nsid": 2, 00:22:24.903 "bdev_name": "Malloc1", 00:22:24.903 "name": "Malloc1", 00:22:24.903 "nguid": "1C11F3B99B314042A3CB6EC99727602C", 00:22:24.903 "uuid": "1c11f3b9-9b31-4042-a3cb-6ec99727602c" 00:22:24.903 } 00:22:24.903 ] 00:22:24.903 } 00:22:24.903 ] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1789164 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:24.903 rmmod nvme_tcp 00:22:24.903 rmmod nvme_fabrics 00:22:24.903 rmmod nvme_keyring 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1789078 ']' 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1789078 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1789078 ']' 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1789078 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1789078 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1789078' 00:22:24.903 killing process with pid 1789078 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1789078 00:22:24.903 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1789078 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:25.162 12:56:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.728 12:56:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:27.728 00:22:27.728 real 0m9.505s 00:22:27.728 user 0m7.085s 00:22:27.728 sys 0m4.815s 00:22:27.728 12:56:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:27.728 12:56:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:27.728 ************************************ 00:22:27.728 END TEST nvmf_aer 00:22:27.728 ************************************ 00:22:27.728 12:56:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:27.728 12:56:58 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:27.728 12:56:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:27.728 12:56:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.728 12:56:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.728 ************************************ 00:22:27.728 START TEST nvmf_async_init 00:22:27.728 ************************************ 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:27.728 * Looking for test storage... 00:22:27.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f9c5e2f295234da2a17a0f16df397d9b 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.728 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.729 12:56:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:33.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:33.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:33.008 Found net devices under 0000:86:00.0: cvl_0_0 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.008 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:33.009 Found net devices under 0000:86:00.1: cvl_0_1 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.009 
12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:22:33.009 00:22:33.009 --- 10.0.0.2 ping statistics --- 00:22:33.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.009 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:22:33.009 00:22:33.009 --- 10.0.0.1 ping statistics --- 00:22:33.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.009 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.009 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1792788 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1792788 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1792788 ']' 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.269 12:57:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:33.269 [2024-07-15 12:57:04.016370] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:33.269 [2024-07-15 12:57:04.016415] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.269 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.269 [2024-07-15 12:57:04.087214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.269 [2024-07-15 12:57:04.162215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.269 [2024-07-15 12:57:04.162257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.269 [2024-07-15 12:57:04.162265] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.269 [2024-07-15 12:57:04.162271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.269 [2024-07-15 12:57:04.162276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.269 [2024-07-15 12:57:04.162299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.207 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 [2024-07-15 12:57:04.870663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 null0 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f9c5e2f295234da2a17a0f16df397d9b 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 [2024-07-15 12:57:04.914892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.208 nvme0n1 00:22:34.208 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.208 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.208 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.208 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.468 [ 00:22:34.468 { 00:22:34.468 "name": "nvme0n1", 00:22:34.468 "aliases": [ 00:22:34.468 "f9c5e2f2-9523-4da2-a17a-0f16df397d9b" 00:22:34.468 ], 00:22:34.468 "product_name": "NVMe disk", 00:22:34.468 "block_size": 512, 00:22:34.468 "num_blocks": 2097152, 00:22:34.468 "uuid": "f9c5e2f2-9523-4da2-a17a-0f16df397d9b", 00:22:34.468 "assigned_rate_limits": { 00:22:34.468 "rw_ios_per_sec": 0, 00:22:34.468 "rw_mbytes_per_sec": 0, 00:22:34.468 "r_mbytes_per_sec": 0, 00:22:34.468 "w_mbytes_per_sec": 0 00:22:34.468 }, 00:22:34.468 "claimed": false, 00:22:34.468 "zoned": false, 00:22:34.468 "supported_io_types": { 00:22:34.468 "read": true, 00:22:34.468 "write": true, 00:22:34.468 "unmap": false, 00:22:34.468 "flush": true, 00:22:34.468 "reset": true, 00:22:34.468 "nvme_admin": true, 00:22:34.468 "nvme_io": true, 00:22:34.468 "nvme_io_md": false, 00:22:34.468 "write_zeroes": true, 00:22:34.468 "zcopy": false, 00:22:34.468 "get_zone_info": false, 00:22:34.468 "zone_management": false, 00:22:34.468 "zone_append": false, 00:22:34.468 "compare": true, 00:22:34.468 "compare_and_write": true, 00:22:34.468 "abort": true, 00:22:34.468 "seek_hole": false, 00:22:34.468 "seek_data": false, 00:22:34.468 "copy": true, 00:22:34.468 "nvme_iov_md": false 00:22:34.468 }, 00:22:34.468 "memory_domains": [ 00:22:34.468 { 00:22:34.468 "dma_device_id": "system", 00:22:34.468 "dma_device_type": 1 00:22:34.468 } 00:22:34.468 ], 00:22:34.468 "driver_specific": { 00:22:34.468 "nvme": [ 00:22:34.468 { 00:22:34.468 "trid": { 00:22:34.468 "trtype": "TCP", 00:22:34.468 "adrfam": "IPv4", 00:22:34.468 "traddr": "10.0.0.2", 00:22:34.468 "trsvcid": "4420", 00:22:34.468 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.468 }, 00:22:34.468 "ctrlr_data": { 00:22:34.468 "cntlid": 1, 00:22:34.468 "vendor_id": "0x8086", 00:22:34.468 "model_number": "SPDK bdev Controller", 00:22:34.468 "serial_number": "00000000000000000000", 00:22:34.468 "firmware_revision": "24.09", 00:22:34.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.468 "oacs": { 00:22:34.468 "security": 0, 00:22:34.468 "format": 0, 00:22:34.468 "firmware": 0, 00:22:34.468 "ns_manage": 0 00:22:34.468 }, 00:22:34.468 "multi_ctrlr": true, 00:22:34.468 "ana_reporting": false 00:22:34.468 }, 00:22:34.468 "vs": { 00:22:34.468 "nvme_version": "1.3" 00:22:34.468 }, 00:22:34.468 "ns_data": { 00:22:34.468 "id": 1, 00:22:34.468 "can_share": true 00:22:34.468 } 00:22:34.468 } 00:22:34.468 ], 00:22:34.468 "mp_policy": "active_passive" 00:22:34.468 } 00:22:34.468 } 00:22:34.468 ] 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.468 [2024-07-15 12:57:05.183456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:34.468 [2024-07-15 12:57:05.183514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a3250 (9): Bad file descriptor 00:22:34.468 [2024-07-15 12:57:05.315311] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.468 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.468 [ 00:22:34.468 { 00:22:34.468 "name": "nvme0n1", 00:22:34.468 "aliases": [ 00:22:34.468 "f9c5e2f2-9523-4da2-a17a-0f16df397d9b" 00:22:34.468 ], 00:22:34.468 "product_name": "NVMe disk", 00:22:34.468 "block_size": 512, 00:22:34.468 "num_blocks": 2097152, 00:22:34.468 "uuid": "f9c5e2f2-9523-4da2-a17a-0f16df397d9b", 00:22:34.468 "assigned_rate_limits": { 00:22:34.468 "rw_ios_per_sec": 0, 00:22:34.468 "rw_mbytes_per_sec": 0, 00:22:34.468 "r_mbytes_per_sec": 0, 00:22:34.468 "w_mbytes_per_sec": 0 00:22:34.468 }, 00:22:34.468 "claimed": false, 00:22:34.468 "zoned": false, 00:22:34.468 "supported_io_types": { 00:22:34.468 "read": true, 00:22:34.468 "write": true, 00:22:34.468 "unmap": false, 00:22:34.468 "flush": true, 00:22:34.468 "reset": true, 00:22:34.468 "nvme_admin": true, 00:22:34.468 "nvme_io": true, 00:22:34.468 "nvme_io_md": false, 00:22:34.468 "write_zeroes": true, 00:22:34.468 "zcopy": false, 00:22:34.468 "get_zone_info": false, 00:22:34.468 "zone_management": false, 00:22:34.468 "zone_append": false, 00:22:34.468 "compare": true, 00:22:34.468 "compare_and_write": true, 00:22:34.468 "abort": true, 00:22:34.468 "seek_hole": false, 00:22:34.468 "seek_data": false, 00:22:34.468 "copy": true, 00:22:34.468 "nvme_iov_md": false 00:22:34.468 }, 00:22:34.468 "memory_domains": [ 00:22:34.468 { 00:22:34.468 "dma_device_id": "system", 00:22:34.468 "dma_device_type": 1 00:22:34.468 } 00:22:34.468 ], 00:22:34.468 "driver_specific": { 00:22:34.468 "nvme": [ 00:22:34.468 { 00:22:34.468 "trid": { 00:22:34.468 "trtype": "TCP", 00:22:34.468 "adrfam": "IPv4", 00:22:34.468 "traddr": "10.0.0.2", 00:22:34.468 "trsvcid": "4420", 00:22:34.469 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.469 }, 00:22:34.469 "ctrlr_data": { 00:22:34.469 "cntlid": 2, 00:22:34.469 "vendor_id": "0x8086", 00:22:34.469 "model_number": "SPDK bdev Controller", 00:22:34.469 "serial_number": "00000000000000000000", 00:22:34.469 "firmware_revision": "24.09", 00:22:34.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.469 "oacs": { 00:22:34.469 "security": 0, 00:22:34.469 "format": 0, 00:22:34.469 "firmware": 0, 00:22:34.469 "ns_manage": 0 00:22:34.469 }, 00:22:34.469 "multi_ctrlr": true, 00:22:34.469 "ana_reporting": false 00:22:34.469 }, 00:22:34.469 "vs": { 00:22:34.469 "nvme_version": "1.3" 00:22:34.469 }, 00:22:34.469 "ns_data": { 00:22:34.469 "id": 1, 00:22:34.469 "can_share": true 00:22:34.469 } 00:22:34.469 } 00:22:34.469 ], 00:22:34.469 "mp_policy": "active_passive" 00:22:34.469 } 00:22:34.469 } 
00:22:34.469 ] 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OAGkvrLBWE 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OAGkvrLBWE 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.469 [2024-07-15 12:57:05.376044] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.469 [2024-07-15 12:57:05.376170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OAGkvrLBWE 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.469 [2024-07-15 12:57:05.384061] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OAGkvrLBWE 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.469 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.469 [2024-07-15 12:57:05.392097] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.469 [2024-07-15 12:57:05.392135] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:22:34.729 nvme0n1 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.729 [ 00:22:34.729 { 00:22:34.729 "name": "nvme0n1", 00:22:34.729 "aliases": [ 00:22:34.729 "f9c5e2f2-9523-4da2-a17a-0f16df397d9b" 00:22:34.729 ], 00:22:34.729 "product_name": "NVMe disk", 00:22:34.729 "block_size": 512, 00:22:34.729 "num_blocks": 2097152, 00:22:34.729 "uuid": "f9c5e2f2-9523-4da2-a17a-0f16df397d9b", 00:22:34.729 "assigned_rate_limits": { 00:22:34.729 "rw_ios_per_sec": 0, 00:22:34.729 "rw_mbytes_per_sec": 0, 00:22:34.729 "r_mbytes_per_sec": 0, 00:22:34.729 "w_mbytes_per_sec": 0 00:22:34.729 }, 00:22:34.729 "claimed": false, 00:22:34.729 "zoned": false, 00:22:34.729 "supported_io_types": { 00:22:34.729 "read": true, 00:22:34.729 "write": true, 00:22:34.729 "unmap": false, 00:22:34.729 "flush": true, 00:22:34.729 "reset": true, 00:22:34.729 "nvme_admin": true, 00:22:34.729 "nvme_io": true, 00:22:34.729 "nvme_io_md": false, 00:22:34.729 "write_zeroes": true, 00:22:34.729 "zcopy": false, 00:22:34.729 "get_zone_info": false, 00:22:34.729 "zone_management": false, 00:22:34.729 "zone_append": false, 00:22:34.729 "compare": true, 00:22:34.729 "compare_and_write": true, 00:22:34.729 "abort": true, 00:22:34.729 "seek_hole": false, 00:22:34.729 "seek_data": false, 00:22:34.729 "copy": true, 00:22:34.729 "nvme_iov_md": false 00:22:34.729 }, 00:22:34.729 "memory_domains": [ 00:22:34.729 { 00:22:34.729 "dma_device_id": "system", 00:22:34.729 "dma_device_type": 1 00:22:34.729 } 00:22:34.729 ], 00:22:34.729 "driver_specific": { 00:22:34.729 "nvme": [ 00:22:34.729 { 00:22:34.729 "trid": { 00:22:34.729 "trtype": "TCP", 00:22:34.729 "adrfam": "IPv4", 00:22:34.729 "traddr": "10.0.0.2", 00:22:34.729 "trsvcid": "4421", 00:22:34.729 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:34.729 }, 00:22:34.729 "ctrlr_data": { 00:22:34.729 "cntlid": 3, 00:22:34.729 "vendor_id": "0x8086", 00:22:34.729 "model_number": "SPDK bdev Controller", 00:22:34.729 "serial_number": "00000000000000000000", 00:22:34.729 "firmware_revision": "24.09", 00:22:34.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.729 "oacs": { 00:22:34.729 "security": 0, 00:22:34.729 "format": 0, 00:22:34.729 "firmware": 0, 00:22:34.729 "ns_manage": 0 00:22:34.729 }, 00:22:34.729 "multi_ctrlr": true, 00:22:34.729 "ana_reporting": false 00:22:34.729 }, 00:22:34.729 "vs": { 00:22:34.729 "nvme_version": "1.3" 00:22:34.729 }, 00:22:34.729 "ns_data": { 00:22:34.729 "id": 1, 00:22:34.729 "can_share": true 00:22:34.729 } 00:22:34.729 } 00:22:34.729 ], 00:22:34.729 "mp_policy": "active_passive" 00:22:34.729 } 00:22:34.729 } 00:22:34.729 ] 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.729 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.OAGkvrLBWE 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.730 rmmod nvme_tcp 00:22:34.730 rmmod nvme_fabrics 00:22:34.730 rmmod nvme_keyring 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1792788 ']' 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1792788 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1792788 ']' 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1792788 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1792788 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1792788' 00:22:34.730 killing process with pid 1792788 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1792788 00:22:34.730 [2024-07-15 12:57:05.598176] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:34.730 [2024-07-15 12:57:05.598201] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:34.730 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1792788 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.990 12:57:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:36.897 12:57:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.898 00:22:36.898 real 0m9.715s 00:22:36.898 user 0m3.533s 00:22:36.898 sys 0m4.739s 00:22:36.898 12:57:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.898 12:57:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.898 ************************************ 00:22:36.898 END TEST nvmf_async_init 00:22:36.898 ************************************ 00:22:37.157 12:57:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.157 12:57:07 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:37.157 12:57:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.157 12:57:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.157 12:57:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.157 ************************************ 00:22:37.157 START TEST dma 00:22:37.157 ************************************ 00:22:37.157 12:57:07 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:37.157 * Looking for test storage... 00:22:37.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.157 12:57:07 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.157 12:57:07 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.157 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.157 12:57:08 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.157 12:57:08 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.157 12:57:08 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.157 12:57:08 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.157 12:57:08 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.157 12:57:08 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.157 12:57:08 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:37.158 12:57:08 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.158 12:57:08 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.158 12:57:08 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:37.158 12:57:08 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:37.158 00:22:37.158 real 0m0.118s 00:22:37.158 user 0m0.053s 00:22:37.158 sys 0m0.073s 00:22:37.158 12:57:08 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.158 12:57:08 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:37.158 ************************************ 00:22:37.158 END TEST dma 00:22:37.158 ************************************ 00:22:37.158 12:57:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.158 12:57:08 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:37.158 12:57:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.158 12:57:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.158 12:57:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.158 ************************************ 00:22:37.158 START TEST nvmf_identify 00:22:37.158 ************************************ 00:22:37.158 12:57:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:37.419 * Looking for test storage... 00:22:37.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.419 12:57:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:42.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:42.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:42.703 Found net devices under 0000:86:00.0: cvl_0_0 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
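Both ice ports (0x8086:0x159b) have now been matched against the e810 table; the loop traced in the next lines resolves each PCI address to the kernel net device bound to it by globbing sysfs. A minimal standalone sketch of that lookup, using the first port's address from this run (an illustration only; adjust the address for another host):

#!/usr/bin/env bash
# Resolve a PCI network function to its kernel net device(s) via the same
# /sys/bus/pci/devices/<pci>/net/ layout nvmf/common.sh globs below.
pci=0000:86:00.0                      # first ice port seen in this log
for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
  [[ -e $netdir ]] || continue        # glob left unexpanded: nothing bound
  dev=${netdir##*/}                   # basename, e.g. cvl_0_0 in this run
  echo "Found net device under $pci: $dev ($(cat "$netdir/operstate"))"
done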
00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:42.703 Found net devices under 0000:86:00.1: cvl_0_1 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.703 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.704 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:42.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:42.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:22:42.963 00:22:42.963 --- 10.0.0.2 ping statistics --- 00:22:42.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.963 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:22:42.963 00:22:42.963 --- 10.0.0.1 ping statistics --- 00:22:42.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.963 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.963 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1796962 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1796962 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1796962 ']' 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.964 12:57:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.224 [2024-07-15 12:57:13.948484] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:43.224 [2024-07-15 12:57:13.948527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.224 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.224 [2024-07-15 12:57:14.023374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.224 [2024-07-15 12:57:14.098495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.224 [2024-07-15 12:57:14.098536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.224 [2024-07-15 12:57:14.098543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.224 [2024-07-15 12:57:14.098550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.224 [2024-07-15 12:57:14.098555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.224 [2024-07-15 12:57:14.098611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.224 [2024-07-15 12:57:14.098646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.224 [2024-07-15 12:57:14.098751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.224 [2024-07-15 12:57:14.098752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 [2024-07-15 12:57:14.769931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 Malloc0 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 [2024-07-15 12:57:14.857819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.168 [ 00:22:44.168 { 00:22:44.168 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:44.168 "subtype": "Discovery", 00:22:44.168 "listen_addresses": [ 00:22:44.168 { 00:22:44.168 "trtype": "TCP", 00:22:44.168 "adrfam": "IPv4", 00:22:44.168 "traddr": "10.0.0.2", 00:22:44.168 "trsvcid": "4420" 00:22:44.168 } 00:22:44.168 ], 00:22:44.168 "allow_any_host": true, 00:22:44.168 "hosts": [] 00:22:44.168 }, 00:22:44.168 { 00:22:44.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.168 "subtype": "NVMe", 00:22:44.168 "listen_addresses": [ 00:22:44.168 { 00:22:44.168 "trtype": "TCP", 00:22:44.168 "adrfam": "IPv4", 00:22:44.168 "traddr": "10.0.0.2", 00:22:44.168 "trsvcid": "4420" 00:22:44.168 } 00:22:44.168 ], 00:22:44.168 "allow_any_host": true, 00:22:44.168 "hosts": [], 00:22:44.168 "serial_number": "SPDK00000000000001", 00:22:44.168 "model_number": "SPDK bdev Controller", 00:22:44.168 "max_namespaces": 32, 00:22:44.168 "min_cntlid": 1, 00:22:44.168 "max_cntlid": 65519, 00:22:44.168 "namespaces": [ 00:22:44.168 { 00:22:44.168 "nsid": 1, 00:22:44.168 "bdev_name": "Malloc0", 00:22:44.168 "name": "Malloc0", 00:22:44.168 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:44.168 "eui64": "ABCDEF0123456789", 00:22:44.168 "uuid": "741c73b2-98d8-4ac6-babd-c9441778ec45" 00:22:44.168 } 00:22:44.168 ] 00:22:44.168 } 00:22:44.168 ] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.168 12:57:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:44.168 [2024-07-15 12:57:14.907525] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
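The spdk_nvme_identify run starting above queries a target that host/identify.sh provisioned moments earlier through the harness's rpc_cmd wrapper (the RPC calls and the resulting subsystem JSON are traced above). The same target state can be reproduced against any running nvmf_tgt with scripts/rpc.py; a sketch using the values from this run:

#!/usr/bin/env bash
# Re-issue the provisioning RPCs traced in host/identify.sh@24-37 above.
RPC="scripts/rpc.py"                        # run from an SPDK checkout
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                    # prints the JSON shown above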
00:22:44.168 [2024-07-15 12:57:14.907558] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797211 ] 00:22:44.168 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.168 [2024-07-15 12:57:14.937767] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:44.169 [2024-07-15 12:57:14.937816] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:44.169 [2024-07-15 12:57:14.937820] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:44.169 [2024-07-15 12:57:14.937834] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:44.169 [2024-07-15 12:57:14.937840] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:44.169 [2024-07-15 12:57:14.938209] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:44.169 [2024-07-15 12:57:14.938245] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x247bec0 0 00:22:44.169 [2024-07-15 12:57:14.952236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:44.169 [2024-07-15 12:57:14.952248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:44.169 [2024-07-15 12:57:14.952252] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:44.169 [2024-07-15 12:57:14.952255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:44.169 [2024-07-15 12:57:14.952291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.952296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.952300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.952312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:44.169 [2024-07-15 12:57:14.952329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.960236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.960244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.960247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.960260] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:44.169 [2024-07-15 12:57:14.960266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:44.169 [2024-07-15 12:57:14.960271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:44.169 [2024-07-15 12:57:14.960283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960287] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.960297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.960310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.960481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.960487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.960490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.960501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:44.169 [2024-07-15 12:57:14.960507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:44.169 [2024-07-15 12:57:14.960514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.960526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.960536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.960609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.960615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.960618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.960626] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:44.169 [2024-07-15 12:57:14.960633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:44.169 [2024-07-15 12:57:14.960639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.960651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.960660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.960728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 
[2024-07-15 12:57:14.960734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.960737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.960744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:44.169 [2024-07-15 12:57:14.960752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.960764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.960773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.960846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.960851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.960854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.960862] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:44.169 [2024-07-15 12:57:14.960865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:44.169 [2024-07-15 12:57:14.960874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:44.169 [2024-07-15 12:57:14.960979] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:44.169 [2024-07-15 12:57:14.960983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:44.169 [2024-07-15 12:57:14.960991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.960997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.961003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.961013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.961083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.961088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.961091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.961099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:44.169 [2024-07-15 12:57:14.961106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.961119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.961127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.961200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.961206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.961209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.169 [2024-07-15 12:57:14.961216] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:44.169 [2024-07-15 12:57:14.961220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:44.169 [2024-07-15 12:57:14.961233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:44.169 [2024-07-15 12:57:14.961241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:44.169 [2024-07-15 12:57:14.961249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.169 [2024-07-15 12:57:14.961259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.169 [2024-07-15 12:57:14.961269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.169 [2024-07-15 12:57:14.961360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.169 [2024-07-15 12:57:14.961366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.169 [2024-07-15 12:57:14.961369] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961373] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247bec0): datao=0, datal=4096, cccid=0 00:22:44.169 [2024-07-15 12:57:14.961377] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24fee40) on tqpair(0x247bec0): expected_datao=0, payload_size=4096 00:22:44.169 [2024-07-15 12:57:14.961381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961405] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961409] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.169 [2024-07-15 12:57:14.961454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.169 [2024-07-15 12:57:14.961457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.169 [2024-07-15 12:57:14.961460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.170 [2024-07-15 12:57:14.961467] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:44.170 [2024-07-15 12:57:14.961474] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:44.170 [2024-07-15 12:57:14.961478] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:44.170 [2024-07-15 12:57:14.961483] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:44.170 [2024-07-15 12:57:14.961486] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:44.170 [2024-07-15 12:57:14.961490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:44.170 [2024-07-15 12:57:14.961499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:44.170 [2024-07-15 12:57:14.961505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.170 [2024-07-15 12:57:14.961527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.170 [2024-07-15 12:57:14.961602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.170 [2024-07-15 12:57:14.961608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.170 [2024-07-15 12:57:14.961611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.170 [2024-07-15 12:57:14.961620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.170 [2024-07-15 12:57:14.961637] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.170 [2024-07-15 12:57:14.961656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.170 [2024-07-15 12:57:14.961672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.170 [2024-07-15 12:57:14.961687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:44.170 [2024-07-15 12:57:14.961697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:44.170 [2024-07-15 12:57:14.961703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.170 [2024-07-15 12:57:14.961722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fee40, cid 0, qid 0 00:22:44.170 [2024-07-15 12:57:14.961726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24fefc0, cid 1, qid 0 00:22:44.170 [2024-07-15 12:57:14.961730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff140, cid 2, qid 0 00:22:44.170 [2024-07-15 12:57:14.961734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.170 [2024-07-15 12:57:14.961738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff440, cid 4, qid 0 00:22:44.170 [2024-07-15 12:57:14.961839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.170 [2024-07-15 12:57:14.961845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.170 [2024-07-15 12:57:14.961848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff440) on tqpair=0x247bec0 00:22:44.170 [2024-07-15 12:57:14.961856] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:44.170 [2024-07-15 12:57:14.961860] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:44.170 [2024-07-15 12:57:14.961869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.961878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.170 [2024-07-15 12:57:14.961887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff440, cid 4, qid 0 00:22:44.170 [2024-07-15 12:57:14.961969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.170 [2024-07-15 12:57:14.961974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.170 [2024-07-15 12:57:14.961978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961981] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247bec0): datao=0, datal=4096, cccid=4 00:22:44.170 [2024-07-15 12:57:14.961987] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ff440) on tqpair(0x247bec0): expected_datao=0, payload_size=4096 00:22:44.170 [2024-07-15 12:57:14.961991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961996] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.961999] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.170 [2024-07-15 12:57:14.962046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.170 [2024-07-15 12:57:14.962049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff440) on tqpair=0x247bec0 00:22:44.170 [2024-07-15 12:57:14.962063] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:44.170 [2024-07-15 12:57:14.962083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.962093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.170 [2024-07-15 12:57:14.962098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:14.962110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.170 [2024-07-15 12:57:14.962123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x24ff440, cid 4, qid 0 00:22:44.170 [2024-07-15 12:57:14.962127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff5c0, cid 5, qid 0 00:22:44.170 [2024-07-15 12:57:14.962235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.170 [2024-07-15 12:57:14.962241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.170 [2024-07-15 12:57:14.962244] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962247] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247bec0): datao=0, datal=1024, cccid=4 00:22:44.170 [2024-07-15 12:57:14.962251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ff440) on tqpair(0x247bec0): expected_datao=0, payload_size=1024 00:22:44.170 [2024-07-15 12:57:14.962254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962260] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962263] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.170 [2024-07-15 12:57:14.962273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.170 [2024-07-15 12:57:14.962276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:14.962279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff5c0) on tqpair=0x247bec0 00:22:44.170 [2024-07-15 12:57:15.003365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.170 [2024-07-15 12:57:15.003376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.170 [2024-07-15 12:57:15.003379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:15.003383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff440) on tqpair=0x247bec0 00:22:44.170 [2024-07-15 12:57:15.003398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:15.003402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247bec0) 00:22:44.170 [2024-07-15 12:57:15.003415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.170 [2024-07-15 12:57:15.003432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff440, cid 4, qid 0 00:22:44.170 [2024-07-15 12:57:15.003514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.170 [2024-07-15 12:57:15.003520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.170 [2024-07-15 12:57:15.003523] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:15.003526] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247bec0): datao=0, datal=3072, cccid=4 00:22:44.170 [2024-07-15 12:57:15.003530] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ff440) on tqpair(0x247bec0): expected_datao=0, payload_size=3072 00:22:44.170 [2024-07-15 12:57:15.003534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:15.003540] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.170 [2024-07-15 12:57:15.003543] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.003591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.171 [2024-07-15 12:57:15.003597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.171 [2024-07-15 12:57:15.003599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.003603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff440) on tqpair=0x247bec0 00:22:44.171 [2024-07-15 12:57:15.003610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.003613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247bec0) 00:22:44.171 [2024-07-15 12:57:15.003619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.171 [2024-07-15 12:57:15.003632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff440, cid 4, qid 0 00:22:44.171 [2024-07-15 12:57:15.003707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.171 [2024-07-15 12:57:15.003713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.171 [2024-07-15 12:57:15.003716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.003719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247bec0): datao=0, datal=8, cccid=4 00:22:44.171 [2024-07-15 12:57:15.003723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24ff440) on tqpair(0x247bec0): expected_datao=0, payload_size=8 00:22:44.171 [2024-07-15 12:57:15.003726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.003732] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.003735] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.048238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.171 [2024-07-15 12:57:15.048254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.171 [2024-07-15 12:57:15.048257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.171 [2024-07-15 12:57:15.048261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff440) on tqpair=0x247bec0 00:22:44.171 ===================================================== 00:22:44.171 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:44.171 ===================================================== 00:22:44.171 Controller Capabilities/Features 00:22:44.171 ================================ 00:22:44.171 Vendor ID: 0000 00:22:44.171 Subsystem Vendor ID: 0000 00:22:44.171 Serial Number: .................... 00:22:44.171 Model Number: ........................................ 
00:22:44.171 Firmware Version: 24.09 00:22:44.171 Recommended Arb Burst: 0 00:22:44.171 IEEE OUI Identifier: 00 00 00 00:22:44.171 Multi-path I/O 00:22:44.171 May have multiple subsystem ports: No 00:22:44.171 May have multiple controllers: No 00:22:44.171 Associated with SR-IOV VF: No 00:22:44.171 Max Data Transfer Size: 131072 00:22:44.171 Max Number of Namespaces: 0 00:22:44.171 Max Number of I/O Queues: 1024 00:22:44.171 NVMe Specification Version (VS): 1.3 00:22:44.171 NVMe Specification Version (Identify): 1.3 00:22:44.171 Maximum Queue Entries: 128 00:22:44.171 Contiguous Queues Required: Yes 00:22:44.171 Arbitration Mechanisms Supported 00:22:44.171 Weighted Round Robin: Not Supported 00:22:44.171 Vendor Specific: Not Supported 00:22:44.171 Reset Timeout: 15000 ms 00:22:44.171 Doorbell Stride: 4 bytes 00:22:44.171 NVM Subsystem Reset: Not Supported 00:22:44.171 Command Sets Supported 00:22:44.171 NVM Command Set: Supported 00:22:44.171 Boot Partition: Not Supported 00:22:44.171 Memory Page Size Minimum: 4096 bytes 00:22:44.171 Memory Page Size Maximum: 4096 bytes 00:22:44.171 Persistent Memory Region: Not Supported 00:22:44.171 Optional Asynchronous Events Supported 00:22:44.171 Namespace Attribute Notices: Not Supported 00:22:44.171 Firmware Activation Notices: Not Supported 00:22:44.171 ANA Change Notices: Not Supported 00:22:44.171 PLE Aggregate Log Change Notices: Not Supported 00:22:44.171 LBA Status Info Alert Notices: Not Supported 00:22:44.171 EGE Aggregate Log Change Notices: Not Supported 00:22:44.171 Normal NVM Subsystem Shutdown event: Not Supported 00:22:44.171 Zone Descriptor Change Notices: Not Supported 00:22:44.171 Discovery Log Change Notices: Supported 00:22:44.171 Controller Attributes 00:22:44.171 128-bit Host Identifier: Not Supported 00:22:44.171 Non-Operational Permissive Mode: Not Supported 00:22:44.171 NVM Sets: Not Supported 00:22:44.171 Read Recovery Levels: Not Supported 00:22:44.171 Endurance Groups: Not Supported 00:22:44.171 Predictable Latency Mode: Not Supported 00:22:44.171 Traffic Based Keep ALive: Not Supported 00:22:44.171 Namespace Granularity: Not Supported 00:22:44.171 SQ Associations: Not Supported 00:22:44.171 UUID List: Not Supported 00:22:44.171 Multi-Domain Subsystem: Not Supported 00:22:44.171 Fixed Capacity Management: Not Supported 00:22:44.171 Variable Capacity Management: Not Supported 00:22:44.171 Delete Endurance Group: Not Supported 00:22:44.171 Delete NVM Set: Not Supported 00:22:44.171 Extended LBA Formats Supported: Not Supported 00:22:44.171 Flexible Data Placement Supported: Not Supported 00:22:44.171 00:22:44.171 Controller Memory Buffer Support 00:22:44.171 ================================ 00:22:44.171 Supported: No 00:22:44.171 00:22:44.171 Persistent Memory Region Support 00:22:44.171 ================================ 00:22:44.171 Supported: No 00:22:44.171 00:22:44.171 Admin Command Set Attributes 00:22:44.171 ============================ 00:22:44.171 Security Send/Receive: Not Supported 00:22:44.171 Format NVM: Not Supported 00:22:44.171 Firmware Activate/Download: Not Supported 00:22:44.171 Namespace Management: Not Supported 00:22:44.171 Device Self-Test: Not Supported 00:22:44.171 Directives: Not Supported 00:22:44.171 NVMe-MI: Not Supported 00:22:44.171 Virtualization Management: Not Supported 00:22:44.171 Doorbell Buffer Config: Not Supported 00:22:44.171 Get LBA Status Capability: Not Supported 00:22:44.171 Command & Feature Lockdown Capability: Not Supported 00:22:44.171 Abort Command Limit: 1 00:22:44.171 Async 
Event Request Limit: 4 00:22:44.171 Number of Firmware Slots: N/A 00:22:44.171 Firmware Slot 1 Read-Only: N/A 00:22:44.171 Firmware Activation Without Reset: N/A 00:22:44.171 Multiple Update Detection Support: N/A 00:22:44.171 Firmware Update Granularity: No Information Provided 00:22:44.171 Per-Namespace SMART Log: No 00:22:44.171 Asymmetric Namespace Access Log Page: Not Supported 00:22:44.171 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:44.171 Command Effects Log Page: Not Supported 00:22:44.171 Get Log Page Extended Data: Supported 00:22:44.171 Telemetry Log Pages: Not Supported 00:22:44.171 Persistent Event Log Pages: Not Supported 00:22:44.171 Supported Log Pages Log Page: May Support 00:22:44.171 Commands Supported & Effects Log Page: Not Supported 00:22:44.171 Feature Identifiers & Effects Log Page:May Support 00:22:44.171 NVMe-MI Commands & Effects Log Page: May Support 00:22:44.171 Data Area 4 for Telemetry Log: Not Supported 00:22:44.171 Error Log Page Entries Supported: 128 00:22:44.171 Keep Alive: Not Supported 00:22:44.171 00:22:44.171 NVM Command Set Attributes 00:22:44.171 ========================== 00:22:44.171 Submission Queue Entry Size 00:22:44.171 Max: 1 00:22:44.171 Min: 1 00:22:44.171 Completion Queue Entry Size 00:22:44.171 Max: 1 00:22:44.171 Min: 1 00:22:44.171 Number of Namespaces: 0 00:22:44.171 Compare Command: Not Supported 00:22:44.171 Write Uncorrectable Command: Not Supported 00:22:44.171 Dataset Management Command: Not Supported 00:22:44.171 Write Zeroes Command: Not Supported 00:22:44.171 Set Features Save Field: Not Supported 00:22:44.171 Reservations: Not Supported 00:22:44.171 Timestamp: Not Supported 00:22:44.171 Copy: Not Supported 00:22:44.171 Volatile Write Cache: Not Present 00:22:44.171 Atomic Write Unit (Normal): 1 00:22:44.171 Atomic Write Unit (PFail): 1 00:22:44.171 Atomic Compare & Write Unit: 1 00:22:44.171 Fused Compare & Write: Supported 00:22:44.171 Scatter-Gather List 00:22:44.171 SGL Command Set: Supported 00:22:44.171 SGL Keyed: Supported 00:22:44.171 SGL Bit Bucket Descriptor: Not Supported 00:22:44.171 SGL Metadata Pointer: Not Supported 00:22:44.171 Oversized SGL: Not Supported 00:22:44.171 SGL Metadata Address: Not Supported 00:22:44.171 SGL Offset: Supported 00:22:44.171 Transport SGL Data Block: Not Supported 00:22:44.171 Replay Protected Memory Block: Not Supported 00:22:44.171 00:22:44.171 Firmware Slot Information 00:22:44.171 ========================= 00:22:44.171 Active slot: 0 00:22:44.171 00:22:44.171 00:22:44.171 Error Log 00:22:44.171 ========= 00:22:44.171 00:22:44.171 Active Namespaces 00:22:44.171 ================= 00:22:44.171 Discovery Log Page 00:22:44.171 ================== 00:22:44.171 Generation Counter: 2 00:22:44.171 Number of Records: 2 00:22:44.171 Record Format: 0 00:22:44.171 00:22:44.171 Discovery Log Entry 0 00:22:44.171 ---------------------- 00:22:44.171 Transport Type: 3 (TCP) 00:22:44.171 Address Family: 1 (IPv4) 00:22:44.171 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:44.171 Entry Flags: 00:22:44.171 Duplicate Returned Information: 1 00:22:44.171 Explicit Persistent Connection Support for Discovery: 1 00:22:44.172 Transport Requirements: 00:22:44.172 Secure Channel: Not Required 00:22:44.172 Port ID: 0 (0x0000) 00:22:44.172 Controller ID: 65535 (0xffff) 00:22:44.172 Admin Max SQ Size: 128 00:22:44.172 Transport Service Identifier: 4420 00:22:44.172 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:44.172 Transport Address: 10.0.0.2 00:22:44.172 
Discovery Log Entry 1 00:22:44.172 ---------------------- 00:22:44.172 Transport Type: 3 (TCP) 00:22:44.172 Address Family: 1 (IPv4) 00:22:44.172 Subsystem Type: 2 (NVM Subsystem) 00:22:44.172 Entry Flags: 00:22:44.172 Duplicate Returned Information: 0 00:22:44.172 Explicit Persistent Connection Support for Discovery: 0 00:22:44.172 Transport Requirements: 00:22:44.172 Secure Channel: Not Required 00:22:44.172 Port ID: 0 (0x0000) 00:22:44.172 Controller ID: 65535 (0xffff) 00:22:44.172 Admin Max SQ Size: 128 00:22:44.172 Transport Service Identifier: 4420 00:22:44.172 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:44.172 Transport Address: 10.0.0.2 [2024-07-15 12:57:15.048338] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:44.172 [2024-07-15 12:57:15.048348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fee40) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.172 [2024-07-15 12:57:15.048360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24fefc0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.172 [2024-07-15 12:57:15.048369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff140) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.172 [2024-07-15 12:57:15.048378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.172 [2024-07-15 12:57:15.048391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.048406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.048420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.048486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.048493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.048496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 
12:57:15.048519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.048532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.048612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.048618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.048621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048629] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:44.172 [2024-07-15 12:57:15.048633] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:44.172 [2024-07-15 12:57:15.048641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.048654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.048664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.048769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.048775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.048778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.048806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.048816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.048885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.048891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.048894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.048905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.048912] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.048918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.048927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.048994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.049000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.049003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.049014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.049027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.049037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.049105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.049110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.049113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.049124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.049137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.049146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.049217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.049222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.172 [2024-07-15 12:57:15.049232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.172 [2024-07-15 12:57:15.049244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.172 [2024-07-15 12:57:15.049251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.172 [2024-07-15 12:57:15.049259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.172 [2024-07-15 12:57:15.049269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.172 [2024-07-15 12:57:15.049337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.172 [2024-07-15 12:57:15.049343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.049346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.049358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.049370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.049379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.049451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.049457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.049460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.049471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.049484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.049493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.049557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.049563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.049567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.049578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.049590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.049599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 
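Editor's note: the long run of near-identical FABRIC PROPERTY GET capsules above (and continuing below) is the host's shutdown path at work. After "Prepare to destruct SSD" the driver writes CC.SHN and then polls CSTS.SHST over the admin queue; on a fabrics controller every register read is a full Property Get round trip, hence one capsule_cmd/capsule_resp pair per poll until the "shutdown complete" message further down. A minimal sketch of what each poll amounts to, using SPDK's public register helper (illustrative only; the driver does this internally in nvme_ctrlr.c):

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* One shutdown poll: read CSTS (a Fabrics Property Get on NVMe/TCP,
     * i.e. one capsule_cmd/capsule_resp pair as traced above) and check
     * whether CSTS.SHST reports shutdown complete. */
    static bool
    shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
    }

The log resumes mid-poll below.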
[2024-07-15 12:57:15.049666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.049672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.049676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.049687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.049700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.049711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.049776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.049781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.049784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.049796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.049809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.049818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.049885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.049891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.049894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.049905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.049912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.049917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.049926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.049993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.049999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:44.173 [2024-07-15 12:57:15.050002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.050013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.050026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.050035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.050103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.050108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.050111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.050123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.050136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.050145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.050211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.050217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.050220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.050236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.050249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.050259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.050326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.050332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.050335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.050347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.173 [2024-07-15 12:57:15.050360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.173 [2024-07-15 12:57:15.050369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.173 [2024-07-15 12:57:15.050435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.173 [2024-07-15 12:57:15.050441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.173 [2024-07-15 12:57:15.050444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.173 [2024-07-15 12:57:15.050456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.173 [2024-07-15 12:57:15.050463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.050468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.050478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.050545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.050550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.050553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.050565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.050578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.050587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.050656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.050663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.050667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.050678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050682] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.050691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.050701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.050770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.050775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.050778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.050790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.050802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.050812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.050880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.050887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.050890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.050901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.050908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.050914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.050924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.050990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.050996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.050999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 
[2024-07-15 12:57:15.051024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.051106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.051135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.051214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.051246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.051329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.051357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051366] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.051441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.051467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.051569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.051597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 [2024-07-15 12:57:15.051678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.174 [2024-07-15 12:57:15.051705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.174 [2024-07-15 12:57:15.051714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.174 [2024-07-15 12:57:15.051781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.174 
[2024-07-15 12:57:15.051787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.174 [2024-07-15 12:57:15.051790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.174 [2024-07-15 12:57:15.051801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.174 [2024-07-15 12:57:15.051808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.175 [2024-07-15 12:57:15.051814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.175 [2024-07-15 12:57:15.051823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.175 [2024-07-15 12:57:15.055232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.175 [2024-07-15 12:57:15.055241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.175 [2024-07-15 12:57:15.055244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.175 [2024-07-15 12:57:15.055248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.175 [2024-07-15 12:57:15.055259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.175 [2024-07-15 12:57:15.055263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.175 [2024-07-15 12:57:15.055266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247bec0) 00:22:44.175 [2024-07-15 12:57:15.055273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.175 [2024-07-15 12:57:15.055284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24ff2c0, cid 3, qid 0 00:22:44.175 [2024-07-15 12:57:15.055438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.175 [2024-07-15 12:57:15.055444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.175 [2024-07-15 12:57:15.055448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.175 [2024-07-15 12:57:15.055451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24ff2c0) on tqpair=0x247bec0 00:22:44.175 [2024-07-15 12:57:15.055459] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:44.175 00:22:44.175 12:57:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:44.175 [2024-07-15 12:57:15.079917] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
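Editor's note: the spdk_nvme_identify invocation above is the second half of the test: having dumped and torn down the discovery controller, host/identify.sh points the tool at the NVM subsystem (nqn.2016-06.io.spdk:cnode1) advertised in Discovery Log Entry 1. The -r argument is a transport ID string. A minimal sketch of how an application would parse the same string and connect with SPDK's public API (the app name is a hypothetical placeholder; error handling trimmed):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }

        /* Drives the same connect/init sequence traced below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        spdk_nvme_detach(ctrlr);
        return 0;
    }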
00:22:44.175 [2024-07-15 12:57:15.079942] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797215 ] 00:22:44.175 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.175 [2024-07-15 12:57:15.104574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:44.175 [2024-07-15 12:57:15.104614] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:44.175 [2024-07-15 12:57:15.104619] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:44.175 [2024-07-15 12:57:15.104629] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:44.175 [2024-07-15 12:57:15.104634] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:44.175 [2024-07-15 12:57:15.104957] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:44.175 [2024-07-15 12:57:15.104980] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xef5ec0 0 00:22:44.439 [2024-07-15 12:57:15.122234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:44.439 [2024-07-15 12:57:15.122247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:44.439 [2024-07-15 12:57:15.122251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:44.439 [2024-07-15 12:57:15.122255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:44.439 [2024-07-15 12:57:15.122284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.122289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.122293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.439 [2024-07-15 12:57:15.122303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:44.439 [2024-07-15 12:57:15.122318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.439 [2024-07-15 12:57:15.129234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.439 [2024-07-15 12:57:15.129243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.439 [2024-07-15 12:57:15.129247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.439 [2024-07-15 12:57:15.129260] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:44.439 [2024-07-15 12:57:15.129266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:44.439 [2024-07-15 12:57:15.129271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:44.439 [2024-07-15 12:57:15.129281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.439 
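Editor's note: the "pdu type = N" values that recur throughout this trace are NVMe/TCP PDU type codes. The "pdu type = 1" seen just above, right after "Complete the icreq send", is the controller's ICResp; "5" is a response capsule and "7" is controller-to-host data. For reference (enum names here are illustrative, not SPDK's internal identifiers; the numeric values come from the NVMe/TCP transport specification):

    /* NVMe/TCP PDU type codes, matching the "pdu type" values in this log. */
    enum pdu_type {
        PDU_IC_REQ       = 0x00, /* host -> controller connection init */
        PDU_IC_RESP      = 0x01, /* "pdu type = 1" after the icreq send */
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04, /* command capsules sent by the host */
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": completions */
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": controller-to-host data */
        PDU_R2T          = 0x09,
    };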
[2024-07-15 12:57:15.129288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.439 [2024-07-15 12:57:15.129298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.439 [2024-07-15 12:57:15.129311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.439 [2024-07-15 12:57:15.129475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.439 [2024-07-15 12:57:15.129481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.439 [2024-07-15 12:57:15.129484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.439 [2024-07-15 12:57:15.129491] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:44.439 [2024-07-15 12:57:15.129498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:44.439 [2024-07-15 12:57:15.129504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.439 [2024-07-15 12:57:15.129516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.439 [2024-07-15 12:57:15.129526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.439 [2024-07-15 12:57:15.129599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.439 [2024-07-15 12:57:15.129605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.439 [2024-07-15 12:57:15.129608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.439 [2024-07-15 12:57:15.129615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:44.439 [2024-07-15 12:57:15.129622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:44.439 [2024-07-15 12:57:15.129628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.439 [2024-07-15 12:57:15.129639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.439 [2024-07-15 12:57:15.129648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.439 [2024-07-15 12:57:15.129721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.439 [2024-07-15 12:57:15.129727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.439 
[2024-07-15 12:57:15.129730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.439 [2024-07-15 12:57:15.129737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:44.439 [2024-07-15 12:57:15.129745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.439 [2024-07-15 12:57:15.129758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.439 [2024-07-15 12:57:15.129767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.439 [2024-07-15 12:57:15.129838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.439 [2024-07-15 12:57:15.129844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.439 [2024-07-15 12:57:15.129847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.439 [2024-07-15 12:57:15.129850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.439 [2024-07-15 12:57:15.129854] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:44.439 [2024-07-15 12:57:15.129858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:44.439 [2024-07-15 12:57:15.129864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:44.439 [2024-07-15 12:57:15.129969] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:44.439 [2024-07-15 12:57:15.129973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:44.440 [2024-07-15 12:57:15.129979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.129983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.129986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.129992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.440 [2024-07-15 12:57:15.130001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.440 [2024-07-15 12:57:15.130066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.440 [2024-07-15 12:57:15.130072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.440 [2024-07-15 12:57:15.130075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.440 [2024-07-15 
12:57:15.130082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:44.440 [2024-07-15 12:57:15.130089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.130102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.440 [2024-07-15 12:57:15.130111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.440 [2024-07-15 12:57:15.130183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.440 [2024-07-15 12:57:15.130189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.440 [2024-07-15 12:57:15.130192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.440 [2024-07-15 12:57:15.130199] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:44.440 [2024-07-15 12:57:15.130203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.130209] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:44.440 [2024-07-15 12:57:15.130216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.130223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.130240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.440 [2024-07-15 12:57:15.130250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.440 [2024-07-15 12:57:15.130362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.440 [2024-07-15 12:57:15.130368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.440 [2024-07-15 12:57:15.130371] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130374] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=4096, cccid=0 00:22:44.440 [2024-07-15 12:57:15.130378] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf78e40) on tqpair(0xef5ec0): expected_datao=0, payload_size=4096 00:22:44.440 [2024-07-15 12:57:15.130381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130401] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.130405] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.440 
[2024-07-15 12:57:15.171365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.440 [2024-07-15 12:57:15.171376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.440 [2024-07-15 12:57:15.171379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.440 [2024-07-15 12:57:15.171390] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:44.440 [2024-07-15 12:57:15.171396] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:44.440 [2024-07-15 12:57:15.171400] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:44.440 [2024-07-15 12:57:15.171404] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:44.440 [2024-07-15 12:57:15.171408] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:44.440 [2024-07-15 12:57:15.171412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.440 [2024-07-15 12:57:15.171452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.440 [2024-07-15 12:57:15.171519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.440 [2024-07-15 12:57:15.171525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.440 [2024-07-15 12:57:15.171528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.440 [2024-07-15 12:57:15.171537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.440 [2024-07-15 12:57:15.171556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xef5ec0) 
00:22:44.440 [2024-07-15 12:57:15.171567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.440 [2024-07-15 12:57:15.171572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.440 [2024-07-15 12:57:15.171588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.440 [2024-07-15 12:57:15.171602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.440 [2024-07-15 12:57:15.171637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78e40, cid 0, qid 0 00:22:44.440 [2024-07-15 12:57:15.171641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf78fc0, cid 1, qid 0 00:22:44.440 [2024-07-15 12:57:15.171645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79140, cid 2, qid 0 00:22:44.440 [2024-07-15 12:57:15.171649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.440 [2024-07-15 12:57:15.171653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.440 [2024-07-15 12:57:15.171755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.440 [2024-07-15 12:57:15.171761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.440 [2024-07-15 12:57:15.171763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.440 [2024-07-15 12:57:15.171771] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:44.440 [2024-07-15 12:57:15.171775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171781] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.440 [2024-07-15 12:57:15.171815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.440 [2024-07-15 12:57:15.171882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.440 [2024-07-15 12:57:15.171888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.440 [2024-07-15 12:57:15.171891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.440 [2024-07-15 12:57:15.171945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:44.440 [2024-07-15 12:57:15.171960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.440 [2024-07-15 12:57:15.171964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.440 [2024-07-15 12:57:15.171969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.440 [2024-07-15 12:57:15.171978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.440 [2024-07-15 12:57:15.172059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.441 [2024-07-15 12:57:15.172065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.441 [2024-07-15 12:57:15.172068] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.172071] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=4096, cccid=4 00:22:44.441 [2024-07-15 12:57:15.172075] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf79440) on tqpair(0xef5ec0): expected_datao=0, payload_size=4096 00:22:44.441 [2024-07-15 12:57:15.172079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.172103] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.172107] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.215233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.215243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.215246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.215249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.215258] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:44.441 [2024-07-15 12:57:15.215270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.215279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.215285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.215289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.215295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.215307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.441 [2024-07-15 12:57:15.215476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.441 [2024-07-15 12:57:15.215484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.441 [2024-07-15 12:57:15.215487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.215490] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=4096, cccid=4 00:22:44.441 [2024-07-15 12:57:15.215494] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf79440) on tqpair(0xef5ec0): expected_datao=0, payload_size=4096 00:22:44.441 [2024-07-15 12:57:15.215498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.215517] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.215521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.256383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.256392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.256395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.256398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.256411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.256419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.256427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.256430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.256437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.256448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.441 [2024-07-15 12:57:15.256529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.441 [2024-07-15 12:57:15.256535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.441 [2024-07-15 12:57:15.256538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.256541] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=4096, cccid=4 00:22:44.441 [2024-07-15 12:57:15.256545] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf79440) on tqpair(0xef5ec0): expected_datao=0, payload_size=4096 00:22:44.441 [2024-07-15 12:57:15.256548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.256567] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.256571] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.297385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.297388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.297399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297436] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:44.441 [2024-07-15 12:57:15.297440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:44.441 [2024-07-15 12:57:15.297444] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:44.441 [2024-07-15 12:57:15.297457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.297467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.297473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.297485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.441 [2024-07-15 12:57:15.297498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.441 [2024-07-15 12:57:15.297502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf795c0, cid 5, qid 0 00:22:44.441 [2024-07-15 12:57:15.297597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.297603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.297606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.297614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.297619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.297622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf795c0) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.297633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.297642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.297651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf795c0, cid 5, qid 0 00:22:44.441 [2024-07-15 12:57:15.297728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.297734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.297737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf795c0) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.297748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.297757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.297765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf795c0, cid 5, qid 0 00:22:44.441 [2024-07-15 12:57:15.297846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.297852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.297855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf795c0) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.297866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.297869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.297875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.297884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf795c0, cid 5, qid 0 00:22:44.441 [2024-07-15 12:57:15.297993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.441 [2024-07-15 12:57:15.297998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.441 [2024-07-15 12:57:15.298001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.298004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf795c0) on tqpair=0xef5ec0 00:22:44.441 [2024-07-15 12:57:15.298018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.298022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.298028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.298034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.298037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.298042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.441 [2024-07-15 12:57:15.298048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.441 [2024-07-15 12:57:15.298052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xef5ec0) 00:22:44.441 [2024-07-15 12:57:15.298056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.442 [2024-07-15 12:57:15.298063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xef5ec0) 00:22:44.442 [2024-07-15 12:57:15.298071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.442 [2024-07-15 12:57:15.298082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf795c0, cid 5, qid 0 00:22:44.442 [2024-07-15 12:57:15.298086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79440, cid 4, qid 0 00:22:44.442 [2024-07-15 12:57:15.298090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf79740, cid 6, qid 0 00:22:44.442 [2024-07-15 
12:57:15.298094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf798c0, cid 7, qid 0 00:22:44.442 [2024-07-15 12:57:15.298241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.442 [2024-07-15 12:57:15.298247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.442 [2024-07-15 12:57:15.298250] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298253] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=8192, cccid=5 00:22:44.442 [2024-07-15 12:57:15.298257] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf795c0) on tqpair(0xef5ec0): expected_datao=0, payload_size=8192 00:22:44.442 [2024-07-15 12:57:15.298261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298351] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298355] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.442 [2024-07-15 12:57:15.298364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.442 [2024-07-15 12:57:15.298367] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298370] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=512, cccid=4 00:22:44.442 [2024-07-15 12:57:15.298374] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf79440) on tqpair(0xef5ec0): expected_datao=0, payload_size=512 00:22:44.442 [2024-07-15 12:57:15.298377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298382] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298385] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.442 [2024-07-15 12:57:15.298395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.442 [2024-07-15 12:57:15.298398] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298400] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=512, cccid=6 00:22:44.442 [2024-07-15 12:57:15.298404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf79740) on tqpair(0xef5ec0): expected_datao=0, payload_size=512 00:22:44.442 [2024-07-15 12:57:15.298408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298413] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298415] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:44.442 [2024-07-15 12:57:15.298425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:44.442 [2024-07-15 12:57:15.298428] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298431] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef5ec0): datao=0, datal=4096, cccid=7 00:22:44.442 [2024-07-15 12:57:15.298434] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf798c0) on tqpair(0xef5ec0): expected_datao=0, payload_size=4096 00:22:44.442 [2024-07-15 12:57:15.298438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298443] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298446] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.442 [2024-07-15 12:57:15.298458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.442 [2024-07-15 12:57:15.298461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf795c0) on tqpair=0xef5ec0 00:22:44.442 [2024-07-15 12:57:15.298474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.442 [2024-07-15 12:57:15.298480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.442 [2024-07-15 12:57:15.298482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79440) on tqpair=0xef5ec0 00:22:44.442 [2024-07-15 12:57:15.298493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.442 [2024-07-15 12:57:15.298498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.442 [2024-07-15 12:57:15.298501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79740) on tqpair=0xef5ec0 00:22:44.442 [2024-07-15 12:57:15.298510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.442 [2024-07-15 12:57:15.298516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.442 [2024-07-15 12:57:15.298519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.442 [2024-07-15 12:57:15.298523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf798c0) on tqpair=0xef5ec0 00:22:44.442 ===================================================== 00:22:44.442 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.442 ===================================================== 00:22:44.442 Controller Capabilities/Features 00:22:44.442 ================================ 00:22:44.442 Vendor ID: 8086 00:22:44.442 Subsystem Vendor ID: 8086 00:22:44.442 Serial Number: SPDK00000000000001 00:22:44.442 Model Number: SPDK bdev Controller 00:22:44.442 Firmware Version: 24.09 00:22:44.442 Recommended Arb Burst: 6 00:22:44.442 IEEE OUI Identifier: e4 d2 5c 00:22:44.442 Multi-path I/O 00:22:44.442 May have multiple subsystem ports: Yes 00:22:44.442 May have multiple controllers: Yes 00:22:44.442 Associated with SR-IOV VF: No 00:22:44.442 Max Data Transfer Size: 131072 00:22:44.442 Max Number of Namespaces: 32 00:22:44.442 Max Number of I/O Queues: 127 00:22:44.442 NVMe Specification Version (VS): 1.3 00:22:44.442 NVMe Specification Version (Identify): 1.3 00:22:44.442 Maximum Queue Entries: 128 00:22:44.442 Contiguous Queues Required: Yes 00:22:44.442 Arbitration Mechanisms Supported 00:22:44.442 Weighted Round Robin: Not Supported 00:22:44.442 Vendor Specific: Not Supported 00:22:44.442 Reset Timeout: 15000 ms 00:22:44.442 
Doorbell Stride: 4 bytes 00:22:44.442 NVM Subsystem Reset: Not Supported 00:22:44.442 Command Sets Supported 00:22:44.442 NVM Command Set: Supported 00:22:44.442 Boot Partition: Not Supported 00:22:44.442 Memory Page Size Minimum: 4096 bytes 00:22:44.442 Memory Page Size Maximum: 4096 bytes 00:22:44.442 Persistent Memory Region: Not Supported 00:22:44.442 Optional Asynchronous Events Supported 00:22:44.442 Namespace Attribute Notices: Supported 00:22:44.442 Firmware Activation Notices: Not Supported 00:22:44.442 ANA Change Notices: Not Supported 00:22:44.442 PLE Aggregate Log Change Notices: Not Supported 00:22:44.442 LBA Status Info Alert Notices: Not Supported 00:22:44.442 EGE Aggregate Log Change Notices: Not Supported 00:22:44.442 Normal NVM Subsystem Shutdown event: Not Supported 00:22:44.442 Zone Descriptor Change Notices: Not Supported 00:22:44.442 Discovery Log Change Notices: Not Supported 00:22:44.442 Controller Attributes 00:22:44.442 128-bit Host Identifier: Supported 00:22:44.442 Non-Operational Permissive Mode: Not Supported 00:22:44.442 NVM Sets: Not Supported 00:22:44.442 Read Recovery Levels: Not Supported 00:22:44.442 Endurance Groups: Not Supported 00:22:44.442 Predictable Latency Mode: Not Supported 00:22:44.442 Traffic Based Keep ALive: Not Supported 00:22:44.442 Namespace Granularity: Not Supported 00:22:44.442 SQ Associations: Not Supported 00:22:44.442 UUID List: Not Supported 00:22:44.442 Multi-Domain Subsystem: Not Supported 00:22:44.442 Fixed Capacity Management: Not Supported 00:22:44.442 Variable Capacity Management: Not Supported 00:22:44.442 Delete Endurance Group: Not Supported 00:22:44.442 Delete NVM Set: Not Supported 00:22:44.442 Extended LBA Formats Supported: Not Supported 00:22:44.442 Flexible Data Placement Supported: Not Supported 00:22:44.442 00:22:44.442 Controller Memory Buffer Support 00:22:44.442 ================================ 00:22:44.442 Supported: No 00:22:44.442 00:22:44.442 Persistent Memory Region Support 00:22:44.442 ================================ 00:22:44.442 Supported: No 00:22:44.442 00:22:44.442 Admin Command Set Attributes 00:22:44.442 ============================ 00:22:44.442 Security Send/Receive: Not Supported 00:22:44.442 Format NVM: Not Supported 00:22:44.442 Firmware Activate/Download: Not Supported 00:22:44.442 Namespace Management: Not Supported 00:22:44.442 Device Self-Test: Not Supported 00:22:44.442 Directives: Not Supported 00:22:44.442 NVMe-MI: Not Supported 00:22:44.442 Virtualization Management: Not Supported 00:22:44.442 Doorbell Buffer Config: Not Supported 00:22:44.442 Get LBA Status Capability: Not Supported 00:22:44.442 Command & Feature Lockdown Capability: Not Supported 00:22:44.442 Abort Command Limit: 4 00:22:44.442 Async Event Request Limit: 4 00:22:44.442 Number of Firmware Slots: N/A 00:22:44.442 Firmware Slot 1 Read-Only: N/A 00:22:44.442 Firmware Activation Without Reset: N/A 00:22:44.442 Multiple Update Detection Support: N/A 00:22:44.442 Firmware Update Granularity: No Information Provided 00:22:44.442 Per-Namespace SMART Log: No 00:22:44.442 Asymmetric Namespace Access Log Page: Not Supported 00:22:44.442 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:44.442 Command Effects Log Page: Supported 00:22:44.442 Get Log Page Extended Data: Supported 00:22:44.442 Telemetry Log Pages: Not Supported 00:22:44.442 Persistent Event Log Pages: Not Supported 00:22:44.442 Supported Log Pages Log Page: May Support 00:22:44.442 Commands Supported & Effects Log Page: Not Supported 00:22:44.442 Feature Identifiers & 
Effects Log Page:May Support 00:22:44.442 NVMe-MI Commands & Effects Log Page: May Support 00:22:44.442 Data Area 4 for Telemetry Log: Not Supported 00:22:44.442 Error Log Page Entries Supported: 128 00:22:44.442 Keep Alive: Supported 00:22:44.442 Keep Alive Granularity: 10000 ms 00:22:44.443 00:22:44.443 NVM Command Set Attributes 00:22:44.443 ========================== 00:22:44.443 Submission Queue Entry Size 00:22:44.443 Max: 64 00:22:44.443 Min: 64 00:22:44.443 Completion Queue Entry Size 00:22:44.443 Max: 16 00:22:44.443 Min: 16 00:22:44.443 Number of Namespaces: 32 00:22:44.443 Compare Command: Supported 00:22:44.443 Write Uncorrectable Command: Not Supported 00:22:44.443 Dataset Management Command: Supported 00:22:44.443 Write Zeroes Command: Supported 00:22:44.443 Set Features Save Field: Not Supported 00:22:44.443 Reservations: Supported 00:22:44.443 Timestamp: Not Supported 00:22:44.443 Copy: Supported 00:22:44.443 Volatile Write Cache: Present 00:22:44.443 Atomic Write Unit (Normal): 1 00:22:44.443 Atomic Write Unit (PFail): 1 00:22:44.443 Atomic Compare & Write Unit: 1 00:22:44.443 Fused Compare & Write: Supported 00:22:44.443 Scatter-Gather List 00:22:44.443 SGL Command Set: Supported 00:22:44.443 SGL Keyed: Supported 00:22:44.443 SGL Bit Bucket Descriptor: Not Supported 00:22:44.443 SGL Metadata Pointer: Not Supported 00:22:44.443 Oversized SGL: Not Supported 00:22:44.443 SGL Metadata Address: Not Supported 00:22:44.443 SGL Offset: Supported 00:22:44.443 Transport SGL Data Block: Not Supported 00:22:44.443 Replay Protected Memory Block: Not Supported 00:22:44.443 00:22:44.443 Firmware Slot Information 00:22:44.443 ========================= 00:22:44.443 Active slot: 1 00:22:44.443 Slot 1 Firmware Revision: 24.09 00:22:44.443 00:22:44.443 00:22:44.443 Commands Supported and Effects 00:22:44.443 ============================== 00:22:44.443 Admin Commands 00:22:44.443 -------------- 00:22:44.443 Get Log Page (02h): Supported 00:22:44.443 Identify (06h): Supported 00:22:44.443 Abort (08h): Supported 00:22:44.443 Set Features (09h): Supported 00:22:44.443 Get Features (0Ah): Supported 00:22:44.443 Asynchronous Event Request (0Ch): Supported 00:22:44.443 Keep Alive (18h): Supported 00:22:44.443 I/O Commands 00:22:44.443 ------------ 00:22:44.443 Flush (00h): Supported LBA-Change 00:22:44.443 Write (01h): Supported LBA-Change 00:22:44.443 Read (02h): Supported 00:22:44.443 Compare (05h): Supported 00:22:44.443 Write Zeroes (08h): Supported LBA-Change 00:22:44.443 Dataset Management (09h): Supported LBA-Change 00:22:44.443 Copy (19h): Supported LBA-Change 00:22:44.443 00:22:44.443 Error Log 00:22:44.443 ========= 00:22:44.443 00:22:44.443 Arbitration 00:22:44.443 =========== 00:22:44.443 Arbitration Burst: 1 00:22:44.443 00:22:44.443 Power Management 00:22:44.443 ================ 00:22:44.443 Number of Power States: 1 00:22:44.443 Current Power State: Power State #0 00:22:44.443 Power State #0: 00:22:44.443 Max Power: 0.00 W 00:22:44.443 Non-Operational State: Operational 00:22:44.443 Entry Latency: Not Reported 00:22:44.443 Exit Latency: Not Reported 00:22:44.443 Relative Read Throughput: 0 00:22:44.443 Relative Read Latency: 0 00:22:44.443 Relative Write Throughput: 0 00:22:44.443 Relative Write Latency: 0 00:22:44.443 Idle Power: Not Reported 00:22:44.443 Active Power: Not Reported 00:22:44.443 Non-Operational Permissive Mode: Not Supported 00:22:44.443 00:22:44.443 Health Information 00:22:44.443 ================== 00:22:44.443 Critical Warnings: 00:22:44.443 Available Spare Space: 
OK 00:22:44.443 Temperature: OK 00:22:44.443 Device Reliability: OK 00:22:44.443 Read Only: No 00:22:44.443 Volatile Memory Backup: OK 00:22:44.443 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:44.443 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:44.443 Available Spare: 0% 00:22:44.443 Available Spare Threshold: 0% 00:22:44.443 Life Percentage Used:[2024-07-15 12:57:15.298603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xef5ec0) 00:22:44.443 [2024-07-15 12:57:15.298614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.443 [2024-07-15 12:57:15.298625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf798c0, cid 7, qid 0 00:22:44.443 [2024-07-15 12:57:15.298749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.443 [2024-07-15 12:57:15.298754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.443 [2024-07-15 12:57:15.298757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf798c0) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.298787] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:44.443 [2024-07-15 12:57:15.298796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78e40) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.298801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.443 [2024-07-15 12:57:15.298806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf78fc0) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.298810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.443 [2024-07-15 12:57:15.298814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf79140) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.298818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.443 [2024-07-15 12:57:15.298822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.298826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.443 [2024-07-15 12:57:15.298832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.443 [2024-07-15 12:57:15.298845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.443 [2024-07-15 12:57:15.298855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.443 [2024-07-15 12:57:15.298949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.443 [2024-07-15 12:57:15.298955] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.443 [2024-07-15 12:57:15.298957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.298966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.298973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.443 [2024-07-15 12:57:15.298978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.443 [2024-07-15 12:57:15.298990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.443 [2024-07-15 12:57:15.299074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.443 [2024-07-15 12:57:15.299079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.443 [2024-07-15 12:57:15.299082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.299085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.443 [2024-07-15 12:57:15.299089] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:44.443 [2024-07-15 12:57:15.299093] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:44.443 [2024-07-15 12:57:15.299101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.299105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.443 [2024-07-15 12:57:15.299108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.443 [2024-07-15 12:57:15.299114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.443 [2024-07-15 12:57:15.299122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.299199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.299204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.299207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.299218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.299236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.299246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.299352] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.299357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.299360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.299371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.299383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.299392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.299502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.299508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.299511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.299522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.299534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.299545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.299609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.299615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.299618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.299629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.299641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.299650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.299753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.299759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.299761] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.299773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.299785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.299793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.299904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.299910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.299913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.299923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.299930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.299936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.299944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.300057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.300062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.300065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.300068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.300076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.300080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.300083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.300088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.300097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.300180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.300185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.300188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.300191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 
[2024-07-15 12:57:15.300201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.300204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.300207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.300212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.300221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.304235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.304242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.304245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.304248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.304257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.304260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.304263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef5ec0) 00:22:44.444 [2024-07-15 12:57:15.304269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.444 [2024-07-15 12:57:15.304281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf792c0, cid 3, qid 0 00:22:44.444 [2024-07-15 12:57:15.304424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:44.444 [2024-07-15 12:57:15.304429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:44.444 [2024-07-15 12:57:15.304432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:44.444 [2024-07-15 12:57:15.304435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf792c0) on tqpair=0xef5ec0 00:22:44.444 [2024-07-15 12:57:15.304441] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:44.444 0% 00:22:44.444 Data Units Read: 0 00:22:44.444 Data Units Written: 0 00:22:44.444 Host Read Commands: 0 00:22:44.444 Host Write Commands: 0 00:22:44.444 Controller Busy Time: 0 minutes 00:22:44.444 Power Cycles: 0 00:22:44.444 Power On Hours: 0 hours 00:22:44.444 Unsafe Shutdowns: 0 00:22:44.444 Unrecoverable Media Errors: 0 00:22:44.444 Lifetime Error Log Entries: 0 00:22:44.444 Warning Temperature Time: 0 minutes 00:22:44.444 Critical Temperature Time: 0 minutes 00:22:44.444 00:22:44.444 Number of Queues 00:22:44.444 ================ 00:22:44.444 Number of I/O Submission Queues: 127 00:22:44.444 Number of I/O Completion Queues: 127 00:22:44.444 00:22:44.444 Active Namespaces 00:22:44.444 ================= 00:22:44.444 Namespace ID:1 00:22:44.444 Error Recovery Timeout: Unlimited 00:22:44.444 Command Set Identifier: NVM (00h) 00:22:44.444 Deallocate: Supported 00:22:44.444 Deallocated/Unwritten Error: Not Supported 00:22:44.444 Deallocated Read Value: Unknown 00:22:44.444 Deallocate in Write Zeroes: Not Supported 00:22:44.444 Deallocated Guard Field: 0xFFFF 00:22:44.444 Flush: Supported 00:22:44.444 Reservation: Supported 00:22:44.444 Namespace 
Sharing Capabilities: Multiple Controllers 00:22:44.444 Size (in LBAs): 131072 (0GiB) 00:22:44.444 Capacity (in LBAs): 131072 (0GiB) 00:22:44.444 Utilization (in LBAs): 131072 (0GiB) 00:22:44.444 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:44.444 EUI64: ABCDEF0123456789 00:22:44.444 UUID: 741c73b2-98d8-4ac6-babd-c9441778ec45 00:22:44.444 Thin Provisioning: Not Supported 00:22:44.444 Per-NS Atomic Units: Yes 00:22:44.444 Atomic Boundary Size (Normal): 0 00:22:44.444 Atomic Boundary Size (PFail): 0 00:22:44.444 Atomic Boundary Offset: 0 00:22:44.444 Maximum Single Source Range Length: 65535 00:22:44.444 Maximum Copy Length: 65535 00:22:44.444 Maximum Source Range Count: 1 00:22:44.444 NGUID/EUI64 Never Reused: No 00:22:44.444 Namespace Write Protected: No 00:22:44.444 Number of LBA Formats: 1 00:22:44.444 Current LBA Format: LBA Format #00 00:22:44.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:44.445 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.445 rmmod nvme_tcp 00:22:44.445 rmmod nvme_fabrics 00:22:44.445 rmmod nvme_keyring 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1796962 ']' 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1796962 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1796962 ']' 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1796962 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:44.445 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1796962 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1796962' 00:22:44.705 killing process with pid 1796962 
00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1796962 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1796962 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.705 12:57:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.246 12:57:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.246 00:22:47.246 real 0m9.606s 00:22:47.246 user 0m7.648s 00:22:47.246 sys 0m4.791s 00:22:47.246 12:57:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.246 12:57:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:47.246 ************************************ 00:22:47.246 END TEST nvmf_identify 00:22:47.246 ************************************ 00:22:47.246 12:57:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:47.246 12:57:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:47.246 12:57:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.246 12:57:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.246 12:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.246 ************************************ 00:22:47.246 START TEST nvmf_perf 00:22:47.246 ************************************ 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:47.246 * Looking for test storage... 
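In outline, the nvmftestfini teardown that closed the nvmf_identify test above reduces to the bash sketch below. The module, pid, interface, and namespace names (nvme-tcp, 1796962, cvl_0_1, cvl_0_0_ns_spdk) are the values from this run, and the `ip netns delete` step is an assumption about what _remove_spdk_ns does, since its body is not echoed in the trace:

  # Sketch of the teardown sequence observed above; not the verbatim helper.
  modprobe -v -r nvme-tcp            # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines
  kill 1796962 && wait 1796962       # stop the nvmf_tgt reactor started for this test
  ip netns delete cvl_0_0_ns_spdk    # assumed: how _remove_spdk_ns drops the target namespace
  ip -4 addr flush cvl_0_1           # clear the initiator-side address, as logged above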
00:22:47.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.246 12:57:17 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.246 12:57:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:52.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:52.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:52.559 Found net devices under 0000:86:00.0: cvl_0_0 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:52.559 Found net devices under 0000:86:00.1: cvl_0_1 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.559 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:22:52.819 00:22:52.819 --- 10.0.0.2 ping statistics --- 00:22:52.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.819 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:22:52.819 00:22:52.819 --- 10.0.0.1 ping statistics --- 00:22:52.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.819 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1800725 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1800725 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1800725 ']' 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.819 12:57:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.819 [2024-07-15 12:57:23.741560] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:52.819 [2024-07-15 12:57:23.741606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.819 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.079 [2024-07-15 12:57:23.813914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.079 [2024-07-15 12:57:23.894993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.079 [2024-07-15 12:57:23.895028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:53.079 [2024-07-15 12:57:23.895035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.079 [2024-07-15 12:57:23.895041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.079 [2024-07-15 12:57:23.895046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.079 [2024-07-15 12:57:23.895091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.079 [2024-07-15 12:57:23.895201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.079 [2024-07-15 12:57:23.895240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.079 [2024-07-15 12:57:23.895242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:53.648 12:57:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:56.934 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:56.934 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:56.934 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:56.934 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:57.192 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:57.192 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:57.192 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:57.192 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:57.192 12:57:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:57.504 [2024-07-15 12:57:28.152408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.504 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.504 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:57.504 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.761 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:57.762 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:58.019 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.019 [2024-07-15 12:57:28.915286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.019 12:57:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:58.278 12:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:58.278 12:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:58.278 12:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:58.278 12:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:59.652 Initializing NVMe Controllers 00:22:59.652 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:59.652 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:59.652 Initialization complete. Launching workers. 00:22:59.652 ======================================================== 00:22:59.652 Latency(us) 00:22:59.652 Device Information : IOPS MiB/s Average min max 00:22:59.652 PCIE (0000:5e:00.0) NSID 1 from core 0: 97175.39 379.59 328.86 35.75 7215.14 00:22:59.652 ======================================================== 00:22:59.652 Total : 97175.39 379.59 328.86 35.75 7215.14 00:22:59.652 00:22:59.652 12:57:30 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.652 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.587 Initializing NVMe Controllers 00:23:00.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.587 Initialization complete. Launching workers. 
00:23:00.587 ======================================================== 00:23:00.587 Latency(us) 00:23:00.587 Device Information : IOPS MiB/s Average min max 00:23:00.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.00 0.28 14333.67 123.32 44961.28 00:23:00.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18026.78 7963.25 47885.27 00:23:00.587 ======================================================== 00:23:00.587 Total : 127.00 0.50 15962.13 123.32 47885.27 00:23:00.587 00:23:00.587 12:57:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.845 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.781 Initializing NVMe Controllers 00:23:01.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.781 Initialization complete. Launching workers. 00:23:01.781 ======================================================== 00:23:01.781 Latency(us) 00:23:01.781 Device Information : IOPS MiB/s Average min max 00:23:01.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10989.83 42.93 2911.80 391.51 7059.48 00:23:01.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3894.41 15.21 8246.56 4989.76 15734.55 00:23:01.781 ======================================================== 00:23:01.781 Total : 14884.24 58.14 4307.62 391.51 15734.55 00:23:01.781 00:23:01.781 12:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:01.781 12:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:01.781 12:57:32 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:02.040 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.573 Initializing NVMe Controllers 00:23:04.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.573 Controller IO queue size 128, less than required. 00:23:04.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.573 Controller IO queue size 128, less than required. 00:23:04.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:04.573 Initialization complete. Launching workers. 
00:23:04.573 ======================================================== 00:23:04.573 Latency(us) 00:23:04.573 Device Information : IOPS MiB/s Average min max 00:23:04.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1530.00 382.50 85656.87 49260.80 126873.53 00:23:04.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.00 152.25 220009.17 71278.43 355272.25 00:23:04.573 ======================================================== 00:23:04.573 Total : 2138.99 534.75 123908.65 49260.80 355272.25 00:23:04.573 00:23:04.573 12:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:04.573 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.573 No valid NVMe controllers or AIO or URING devices found 00:23:04.573 Initializing NVMe Controllers 00:23:04.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.573 Controller IO queue size 128, less than required. 00:23:04.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.573 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:04.573 Controller IO queue size 128, less than required. 00:23:04.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:04.573 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:04.573 WARNING: Some requested NVMe devices were skipped 00:23:04.573 12:57:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:04.573 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.107 Initializing NVMe Controllers 00:23:07.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.107 Controller IO queue size 128, less than required. 00:23:07.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.107 Controller IO queue size 128, less than required. 00:23:07.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.107 Initialization complete. Launching workers. 
00:23:07.107 00:23:07.107 ==================== 00:23:07.107 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:07.107 TCP transport: 00:23:07.107 polls: 23731 00:23:07.107 idle_polls: 13092 00:23:07.107 sock_completions: 10639 00:23:07.107 nvme_completions: 6303 00:23:07.107 submitted_requests: 9382 00:23:07.107 queued_requests: 1 00:23:07.107 00:23:07.107 ==================== 00:23:07.107 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:07.107 TCP transport: 00:23:07.107 polls: 22116 00:23:07.107 idle_polls: 12115 00:23:07.107 sock_completions: 10001 00:23:07.107 nvme_completions: 6341 00:23:07.107 submitted_requests: 9490 00:23:07.107 queued_requests: 1 00:23:07.107 ======================================================== 00:23:07.107 Latency(us) 00:23:07.107 Device Information : IOPS MiB/s Average min max 00:23:07.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1573.84 393.46 83906.20 48510.76 132623.15 00:23:07.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1583.33 395.83 81224.86 40340.66 108173.59 00:23:07.107 ======================================================== 00:23:07.107 Total : 3157.17 789.29 82561.50 40340.66 132623.15 00:23:07.107 00:23:07.107 12:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:07.107 12:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.107 rmmod nvme_tcp 00:23:07.107 rmmod nvme_fabrics 00:23:07.107 rmmod nvme_keyring 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1800725 ']' 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1800725 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1800725 ']' 00:23:07.107 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1800725 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1800725 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.365 12:57:38 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1800725' 00:23:07.365 killing process with pid 1800725 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1800725 00:23:07.365 12:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1800725 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.742 12:57:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.325 12:57:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.325 00:23:11.325 real 0m23.878s 00:23:11.325 user 1m2.811s 00:23:11.325 sys 0m7.566s 00:23:11.325 12:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.325 12:57:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:11.325 ************************************ 00:23:11.325 END TEST nvmf_perf 00:23:11.325 ************************************ 00:23:11.325 12:57:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:11.325 12:57:41 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:11.325 12:57:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:11.325 12:57:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.325 12:57:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:11.325 ************************************ 00:23:11.325 START TEST nvmf_fio_host 00:23:11.325 ************************************ 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:11.325 * Looking for test storage... 
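Before the perf workloads above could run, perf.sh brought the target up over JSON-RPC. Condensed from the rpc.py calls logged in this test (NQN, serial, bdev sizes, and 10.0.0.2:4420 are the values this run used), the bring-up order was:

  # Condensed sketch of the call order from the nvmf_perf trace above, not the full perf.sh.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py bdev_malloc_create 64 512                # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # each fabric workload then attaches with, e.g.:
  #   spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
  #     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'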
00:23:11.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.325 12:57:41 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.614 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:16.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:16.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:16.615 Found net devices under 0000:86:00.0: cvl_0_0 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:16.615 Found net devices under 0000:86:00.1: cvl_0_1 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
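Note: the gather_supported_nvmf_pci_devs walk traced above matches NIC functions against a table of known Intel/Mellanox PCI IDs (the E810 ports found here are 0x8086:0x159b) and then maps each matching function to its kernel net device through sysfs. A minimal sketch of that discovery, assuming only the standard sysfs vendor/device attributes (the harness itself goes through a pci_bus_cache first, but the effect is the same):

    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        [[ $(cat "$pci/device") == "$e810" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue        # function with no bound net device
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done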
00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.615 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:23:16.874 00:23:16.874 --- 10.0.0.2 ping statistics --- 00:23:16.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.874 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:23:16.874 00:23:16.874 --- 10.0.0.1 ping statistics --- 00:23:16.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.874 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1806824 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1806824 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1806824 ']' 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.874 12:57:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.874 [2024-07-15 12:57:47.668251] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:16.874 [2024-07-15 12:57:47.668295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.874 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.874 [2024-07-15 12:57:47.735721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.874 [2024-07-15 12:57:47.815610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
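Note: condensed from the nvmf_tcp_init trace above, this is the whole network fixture. One E810 port (cvl_0_0) is moved into a private namespace so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, root namespace) exchange traffic over the link between the two ports (presumably looped back-to-back on this rig) rather than over loopback; nvmf_tgt is then launched under ip netns exec. All commands appear verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator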
00:23:16.874 [2024-07-15 12:57:47.815644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.874 [2024-07-15 12:57:47.815651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.874 [2024-07-15 12:57:47.815657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.874 [2024-07-15 12:57:47.815662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.874 [2024-07-15 12:57:47.815712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.874 [2024-07-15 12:57:47.815820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.874 [2024-07-15 12:57:47.815921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.875 [2024-07-15 12:57:47.815921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:17.809 [2024-07-15 12:57:48.635511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.809 12:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:18.068 Malloc1 00:23:18.068 12:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.328 12:57:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:18.328 12:57:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.586 [2024-07-15 12:57:49.397903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.586 12:57:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:18.844 12:57:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:19.103 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:19.103 fio-3.35 00:23:19.103 Starting 1 thread 00:23:19.103 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.636 00:23:21.636 test: (groupid=0, jobs=1): err= 0: pid=1807364: Mon Jul 15 12:57:52 2024 00:23:21.636 read: IOPS=11.8k, BW=45.9MiB/s (48.1MB/s)(92.1MiB/2005msec) 00:23:21.636 slat (nsec): min=1611, max=252137, avg=1746.81, stdev=2277.75 00:23:21.636 clat (usec): min=3108, max=10504, avg=6017.15, stdev=447.62 00:23:21.636 lat (usec): min=3142, max=10506, avg=6018.90, stdev=447.49 00:23:21.636 clat percentiles (usec): 00:23:21.636 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:23:21.636 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:23:21.636 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:23:21.636 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 9896], 00:23:21.636 | 99.99th=[10421] 00:23:21.636 bw ( KiB/s): 
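Note: the fio job starting above runs against a target that was assembled entirely over JSON-RPC in the preceding chunk. Stripped of xtrace noise and with paths abbreviated, the bring-up plus the SPDK fio-plugin invocation amount to:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1            # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096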
min=46040, max=47696, per=100.00%, avg=47018.00, stdev=716.74, samples=4 00:23:21.636 iops : min=11510, max=11924, avg=11754.50, stdev=179.19, samples=4 00:23:21.636 write: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.5MiB/2005msec); 0 zone resets 00:23:21.636 slat (nsec): min=1660, max=228995, avg=1836.05, stdev=1655.64 00:23:21.636 clat (usec): min=2461, max=9968, avg=4854.34, stdev=377.37 00:23:21.636 lat (usec): min=2476, max=9970, avg=4856.17, stdev=377.31 00:23:21.636 clat percentiles (usec): 00:23:21.636 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:23:21.636 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:23:21.636 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:23:21.636 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7963], 99.95th=[ 8717], 00:23:21.636 | 99.99th=[ 9896] 00:23:21.636 bw ( KiB/s): min=46424, max=47168, per=99.95%, avg=46726.00, stdev=325.58, samples=4 00:23:21.636 iops : min=11606, max=11792, avg=11681.50, stdev=81.39, samples=4 00:23:21.636 lat (msec) : 4=0.53%, 10=99.45%, 20=0.02% 00:23:21.636 cpu : usr=72.36%, sys=25.15%, ctx=61, majf=0, minf=6 00:23:21.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:21.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:21.636 issued rwts: total=23567,23432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.636 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:21.636 00:23:21.636 Run status group 0 (all jobs): 00:23:21.636 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.1MiB (96.5MB), run=2005-2005msec 00:23:21.636 WRITE: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.5MiB (96.0MB), run=2005-2005msec 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:21.636 12:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:21.636 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:21.636 fio-3.35 00:23:21.636 Starting 1 thread 00:23:21.636 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.013 [2024-07-15 12:57:53.838590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b4a0 is same with the state(5) to be set 00:23:23.013 [2024-07-15 12:57:53.838647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b4a0 is same with the state(5) to be set 00:23:23.013 [2024-07-15 12:57:53.838655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4b4a0 is same with the state(5) to be set 00:23:23.949 00:23:23.949 test: (groupid=0, jobs=1): err= 0: pid=1807798: Mon Jul 15 12:57:54 2024 00:23:23.949 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(335MiB/2006msec) 00:23:23.949 slat (nsec): min=2562, max=84361, avg=2841.95, stdev=1223.39 00:23:23.949 clat (usec): min=1659, max=50802, avg=7182.00, stdev=3427.65 00:23:23.949 lat (usec): min=1662, max=50805, avg=7184.84, stdev=3427.69 00:23:23.949 clat percentiles (usec): 00:23:23.949 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 5669], 00:23:23.949 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7373], 00:23:23.949 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8848], 95.00th=[ 9765], 00:23:23.949 | 99.00th=[12518], 99.50th=[44303], 99.90th=[49021], 99.95th=[50594], 00:23:23.949 | 99.99th=[50594] 00:23:23.949 bw ( KiB/s): min=70816, max=97564, per=49.34%, avg=84311.00, stdev=11121.42, samples=4 00:23:23.949 iops : min= 4426, max= 6097, avg=5269.25, stdev=694.79, samples=4 00:23:23.949 write: IOPS=6426, BW=100MiB/s (105MB/s)(173MiB/1719msec); 0 zone resets 00:23:23.949 slat (usec): min=30, max=380, avg=32.16, stdev= 7.05 00:23:23.949 clat (usec): min=3037, max=14395, avg=8555.79, stdev=1475.28 00:23:23.949 lat (usec): min=3067, max=14506, avg=8587.95, stdev=1476.57 00:23:23.949 clat percentiles (usec): 00:23:23.949 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7308], 00:23:23.949 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 
00:23:23.949 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11338], 00:23:23.949 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13960], 99.95th=[14353], 00:23:23.949 | 99.99th=[14353] 00:23:23.949 bw ( KiB/s): min=74848, max=101716, per=85.45%, avg=87861.00, stdev=11295.32, samples=4 00:23:23.949 iops : min= 4678, max= 6357, avg=5491.25, stdev=705.86, samples=4 00:23:23.949 lat (msec) : 2=0.03%, 4=1.36%, 10=90.50%, 20=7.73%, 50=0.36% 00:23:23.949 lat (msec) : 100=0.03% 00:23:23.949 cpu : usr=85.59%, sys=13.37%, ctx=38, majf=0, minf=3 00:23:23.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:23.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:23.949 issued rwts: total=21422,11047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.949 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:23.949 00:23:23.949 Run status group 0 (all jobs): 00:23:23.949 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=335MiB (351MB), run=2006-2006msec 00:23:23.949 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=173MiB (181MB), run=1719-1719msec 00:23:23.949 12:57:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.208 rmmod nvme_tcp 00:23:24.208 rmmod nvme_fabrics 00:23:24.208 rmmod nvme_keyring 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1806824 ']' 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1806824 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1806824 ']' 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1806824 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1806824 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1806824' 00:23:24.208 killing process with pid 1806824 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1806824 00:23:24.208 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1806824 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.467 12:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.003 12:57:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.003 00:23:27.003 real 0m15.682s 00:23:27.003 user 0m46.355s 00:23:27.003 sys 0m6.368s 00:23:27.003 12:57:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:27.003 12:57:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.003 ************************************ 00:23:27.003 END TEST nvmf_fio_host 00:23:27.003 ************************************ 00:23:27.003 12:57:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:27.003 12:57:57 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:27.003 12:57:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:27.003 12:57:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.003 12:57:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.003 ************************************ 00:23:27.003 START TEST nvmf_failover 00:23:27.003 ************************************ 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:27.003 * Looking for test storage... 
00:23:27.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.003 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.004 12:57:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:32.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:32.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.276 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:32.277 Found net devices under 0000:86:00.0: cvl_0_0 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:32.277 Found net devices under 0000:86:00.1: cvl_0_1 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.277 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:32.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:23:32.535 00:23:32.535 --- 10.0.0.2 ping statistics --- 00:23:32.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.535 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:23:32.535 00:23:32.535 --- 10.0.0.1 ping statistics --- 00:23:32.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.535 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1811732 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1811732 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1811732 ']' 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.535 12:58:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.535 [2024-07-15 12:58:03.427030] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
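Note: unlike the nvmf_fio_host target earlier in this log (nvmf_tgt -m 0xF, reactors on cores 0-3), the failover target above is started with -m 0xE, leaving core 0 free; the core mask is a plain hex bitmask over CPUs:

    # -m 0xF -> 0b1111 -> reactors on cores 0,1,2,3   (nvmf_fio_host target)
    # -m 0xE -> 0b1110 -> reactors on cores 1,2,3     (nvmf_failover target)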
00:23:32.535 [2024-07-15 12:58:03.427074] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:32.535 EAL: No free 2048 kB hugepages reported on node 1
00:23:32.873 [2024-07-15 12:58:03.496474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:32.873 [2024-07-15 12:58:03.575409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:32.873 [2024-07-15 12:58:03.575443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:32.873 [2024-07-15 12:58:03.575450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:32.873 [2024-07-15 12:58:03.575456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:32.873 [2024-07-15 12:58:03.575461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:32.873 [2024-07-15 12:58:03.575571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:32.873 [2024-07-15 12:58:03.575675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:32.873 [2024-07-15 12:58:03.575676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:33.439 12:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:33.698 [2024-07-15 12:58:04.436485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:33.698 12:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:33.698 Malloc0
00:23:33.957 12:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:33.957 12:58:04 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:34.215 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:34.474 [2024-07-15 12:58:05.195085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:34.474 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:34.474 [2024-07-15 12:58:05.371568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:34.474 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:34.732 [2024-07-15 12:58:05.544130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1812015
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1812015 /var/tmp/bdevperf.sock
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1812015 ']'
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:34.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:34.732 12:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:35.666 12:58:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:35.666 12:58:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:23:35.666 12:58:06 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:35.923 NVMe0n1
00:23:35.923 12:58:06 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:36.489
00:23:36.489 12:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1812353
00:23:36.489 12:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:36.489 12:58:07 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:23:37.426 12:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:37.426 [2024-07-15 12:58:08.326742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e6080 is same with the state(5) to be set
[... the same tcp.c:1607 message for tqpair=0x14e6080 repeats verbatim, timestamps 12:58:08.326794 through 12:58:08.327310 (elapsed 00:23:37.426-00:23:37.427) ...]
00:23:37.427 12:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:40.715 12:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:40.715
00:23:40.715 12:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:40.973 12:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
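For anyone replaying this exercise outside the CI harness, the sketch below condenses the sequence the trace above drives, using the same rpc.py calls, NQN, address, and ports; it is a minimal sketch, not the actual failover.sh. It assumes a running nvmf_tgt, a bdevperf started with -z -r /var/tmp/bdevperf.sock as shown above, and the same workspace path, and it omits the harness helpers (waitforlisten, perform_tests, killprocess, nvmftestfini) that the xtrace lines come from.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Target side: TCP transport, a 64 MB malloc bdev (512-byte blocks) as the
  # namespace, and three listeners on the same address so they can be rotated.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
  done
  # Initiator side, via bdevperf's RPC socket: attach the same controller name
  # over two ports so the NVMe bdev has a second path, then remove the active
  # listener out from under it to force the failover being tested.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420

The later steps in the trace repeat the same pattern (attach 4422, remove 4421, re-add 4420, remove 4422) so that I/O fails over across all three listeners during the 15-second bdevperf run.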
00:23:44.258 12:58:14 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:44.258 [2024-07-15 12:58:14.992038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:44.258 12:58:15 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:45.193 12:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:45.452 [2024-07-15 12:58:16.195990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e7aa0 is same with the state(5) to be set
[... the same tcp.c:1607 message for tqpair=0x14e7aa0 repeats verbatim, timestamps 12:58:16.196032 through 12:58:16.196527 (elapsed 00:23:45.452-00:23:45.453) ...]
00:23:45.453 12:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1812353
00:23:52.029 0
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1812015
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1812015 ']'
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1812015
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1812015
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1812015'
00:23:52.029 killing process with pid 1812015
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1812015
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1812015
00:23:52.029 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:52.029 [2024-07-15 12:58:05.602769] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:23:52.029 [2024-07-15 12:58:05.602820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812015 ]
00:23:52.029 EAL: No free 2048 kB hugepages reported on node 1
00:23:52.029 [2024-07-15 12:58:05.667374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.029 [2024-07-15 12:58:05.742317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:52.029 Running I/O for 15 seconds...
00:23:52.029 [2024-07-15 12:58:08.327693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:52.029 [2024-07-15 12:58:08.327732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous nvme_io_qpair_print_command READ/WRITE notices, each followed by an ABORTED - SQ DELETION (00/08) completion, repeat for the remaining outstanding I/O, lba 93824 through 94720, timestamps 12:58:08.327748 through 12:58:08.329344 (elapsed 00:23:52.029-00:23:52.032) ...]
00:23:52.032 [2024-07-15 12:58:08.329351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:98 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.032 [2024-07-15 12:58:08.329358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.032 [2024-07-15 12:58:08.329372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.032 [2024-07-15 12:58:08.329387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94752 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94760 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94768 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94776 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94784 len:8 
PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94792 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94800 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94808 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94816 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 
12:58:08.329665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94128 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94136 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94144 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.329740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.329747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.329752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94152 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.329759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.341898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.341911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.341920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94160 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.341929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.341939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.341945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.341953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94168 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.341964] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.341973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.032 [2024-07-15 12:58:08.341982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.032 [2024-07-15 12:58:08.341990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94176 len:8 PRP1 0x0 PRP2 0x0 00:23:52.032 [2024-07-15 12:58:08.341998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.342046] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2021300 was disconnected and freed. reset controller. 00:23:52.032 [2024-07-15 12:58:08.342058] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:52.032 [2024-07-15 12:58:08.342082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.032 [2024-07-15 12:58:08.342092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.342102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.032 [2024-07-15 12:58:08.342111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.032 [2024-07-15 12:58:08.342121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.032 [2024-07-15 12:58:08.342130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:08.342139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.033 [2024-07-15 12:58:08.342149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:08.342158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:52.033 [2024-07-15 12:58:08.342193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2003540 (9): Bad file descriptor 00:23:52.033 [2024-07-15 12:58:08.346069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:52.033 [2024-07-15 12:58:08.386207] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:52.033 [2024-07-15 12:58:11.787253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.033 [2024-07-15 12:58:11.787670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.033 [2024-07-15 12:58:11.787725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.033 [2024-07-15 12:58:11.787731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14840 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.787989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.787995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.788010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.034 [2024-07-15 12:58:11.788024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 
12:58:11.788069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.034 [2024-07-15 12:58:11.788325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.034 [2024-07-15 12:58:11.788334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.035 [2024-07-15 12:58:11.788663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.035 [2024-07-15 12:58:11.788670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:52.035 [2024-07-15 12:58:11.788678-12:58:11.789209] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 31 queued READs (sqid:1, lba:14400-14640, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and 5 queued WRITEs (sqid:1, lba:14904-14936, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.036 [2024-07-15 12:58:11.789234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:52.036 [2024-07-15 12:58:11.789240-12:58:11.789254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request / 243 / 474: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:8 PRP1 0x0 PRP2 0x0 -> ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.036 [2024-07-15 12:58:11.789297] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ce380 was disconnected and freed. reset controller.
00:23:52.036 [2024-07-15 12:58:11.789306] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:52.036 [2024-07-15 12:58:11.789325-12:58:11.789378] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 4 pending ASYNC EVENT REQUESTs (0c) on qid:0 (cid:3, 2, 1, 0) each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.036 [2024-07-15 12:58:11.789385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:52.036 [2024-07-15 12:58:11.792227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:52.036 [2024-07-15 12:58:11.792256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2003540 (9): Bad file descriptor
00:23:52.036 [2024-07-15 12:58:11.868808] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
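A teardown burst like the one above repeats at every reset in this test, so it is easier to audit offline than to read inline. A minimal sketch with standard tools, assuming the console output has been saved to a file (console.log is a placeholder name, not something the test itself produces):

awk '
  { aborted += gsub(/ABORTED - SQ DELETION/, "&") }   # count every aborted completion on the line
  /was disconnected and freed/ {                      # qpair teardown marker from bdev_nvme.c:1612
    print aborted " aborted completions before qpair teardown"
    aborted = 0
  }
' console.log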
00:23:52.036 [2024-07-15 12:58:16.197992-12:58:16.199702] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 57 queued READs (sqid:1, lba:32856-33304, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and 55 queued WRITEs (sqid:1, lba:33312-33744, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:52.039 [2024-07-15 12:58:16.199721-12:58:16.210988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) / 558:nvme_qpair_manual_complete_request: *NOTICE*: 16 remaining WRITEs (sqid:1 cid:0 nsid:1, lba:33752-33872, len:8, PRP1 0x0 PRP2 0x0) completed manually, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; an ~11 ms stall precedes the last two completions at 12:58:16.210936
00:23:52.040 [2024-07-15 12:58:16.211031] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21ce170 was disconnected and freed. reset controller.
00:23:52.040 [2024-07-15 12:58:16.211041] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:52.040 [2024-07-15 12:58:16.211065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.040 [2024-07-15 12:58:16.211074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.040 [2024-07-15 12:58:16.211082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.040 [2024-07-15 12:58:16.211089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.040 [2024-07-15 12:58:16.211097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.040 [2024-07-15 12:58:16.211104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.040 [2024-07-15 12:58:16.211112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.040 [2024-07-15 12:58:16.211119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.040 [2024-07-15 12:58:16.211128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:52.040 [2024-07-15 12:58:16.211159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2003540 (9): Bad file descriptor 00:23:52.040 [2024-07-15 12:58:16.214168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:52.040 [2024-07-15 12:58:16.290917] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
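This block shows one complete failover cycle: bdev_nvme moves the active trid from 10.0.0.2:4422 to 10.0.0.2:4420, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted, the controller is marked failed, the stale TCP qpair can no longer be flushed (hence the Bad file descriptor error on 0x2003540), and the reconnect on the new path ends with 'Resetting controller successful'. The alternate paths that make this possible were registered by attaching the same bdev name on each trid; a condensed, illustrative form of the failover.sh@78-80 rpc.py calls that appear below:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for port in 4420 4421 4422; do
    # re-attaching bdev NVMe0 on another trid registers it as an alternate failover path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
         -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done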
00:23:52.040
00:23:52.040 Latency(us)
00:23:52.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.040 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:52.040 Verification LBA range: start 0x0 length 0x4000
00:23:52.040 NVMe0n1 : 15.00 10882.11 42.51 562.82 0.00 11160.71 455.90 20743.57
00:23:52.040 ===================================================================================================================
00:23:52.040 Total : 10882.11 42.51 562.82 0.00 11160.71 455.90 20743.57
00:23:52.040 Received shutdown signal, test time was about 15.000000 seconds
00:23:52.040
00:23:52.040 Latency(us)
00:23:52.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.040 ===================================================================================================================
00:23:52.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1814762 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1814762 /var/tmp/bdevperf.sock 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1814762 ']' 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.040 12:58:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:52.608 12:58:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.608 12:58:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:52.608 12:58:23 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:52.608 [2024-07-15 12:58:23.551533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.866 12:58:23 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:52.866 [2024-07-15 12:58:23.723992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:52.866 12:58:23 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:53.125 NVMe0n1 00:23:53.125 12:58:23 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:53.693 00:23:53.693 12:58:24 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:53.954 00:23:53.954 12:58:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:53.954 12:58:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:54.248 12:58:24 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.248 12:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:57.541 12:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.541 12:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:57.541 12:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1815768 00:23:57.541 12:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.541 12:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1815768 00:23:58.919 0 00:23:58.919 12:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:58.919 [2024-07-15 12:58:22.592341] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:58.919 [2024-07-15 12:58:22.592391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814762 ] 00:23:58.919 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.919 [2024-07-15 12:58:22.658824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.919 [2024-07-15 12:58:22.728458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.919 [2024-07-15 12:58:25.137131] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:58.919 [2024-07-15 12:58:25.137176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.919 [2024-07-15 12:58:25.137186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.919 [2024-07-15 12:58:25.137195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.919 [2024-07-15 12:58:25.137202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.919 [2024-07-15 12:58:25.137209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.919 [2024-07-15 12:58:25.137216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.919 [2024-07-15 12:58:25.137223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.919 [2024-07-15 12:58:25.137234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.919 [2024-07-15 12:58:25.137241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.919 [2024-07-15 12:58:25.137265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.919 [2024-07-15 12:58:25.137278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acf540 (9): Bad file descriptor 00:23:58.919 [2024-07-15 12:58:25.231409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:58.919 Running I/O for 1 seconds... 
00:23:58.919
00:23:58.919 Latency(us)
00:23:58.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:58.919 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:58.919 Verification LBA range: start 0x0 length 0x4000
00:23:58.919 NVMe0n1 : 1.01 10846.86 42.37 0.00 0.00 11754.55 1232.36 10599.74
00:23:58.919 ===================================================================================================================
00:23:58.919 Total : 10846.86 42.37 0.00 0.00 11754.55 1232.36 10599.74
00:23:58.919 12:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:58.919 12:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.919 12:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.919 12:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.919 12:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:59.179 12:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.438 12:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1814762 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1814762 ']' 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1814762 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1814762 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1814762' killing process with pid 1814762 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1814762 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1814762 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:02.727 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:02.985
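After the one-second verify run, the test walks the controller down its path list: detach the 10.0.0.2:4422 path, confirm NVMe0 is still reported, detach 10.0.0.2:4421, wait, and confirm again before killing bdevperf and deleting the subsystem. The survival check used around failover.sh@99-103 boils down to the following (a condensed sketch of the calls above, not a new test step):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0; then
    echo 'NVMe0 still attached on a remaining path'   # expected after each detach
fi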
12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.985 rmmod nvme_tcp 00:24:02.985 rmmod nvme_fabrics 00:24:02.985 rmmod nvme_keyring 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1811732 ']' 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1811732 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1811732 ']' 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1811732 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.985 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1811732 00:24:02.986 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:02.986 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:02.986 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1811732' 00:24:02.986 killing process with pid 1811732 00:24:02.986 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1811732 00:24:02.986 12:58:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1811732 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.244 12:58:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.777 12:58:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:05.777 00:24:05.777 real 0m38.724s 00:24:05.777 user 2m4.005s 00:24:05.777 sys 0m7.636s 00:24:05.777 12:58:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:05.777 12:58:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
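With the failover checks complete, nvmftestfini unwinds the fixture in roughly the reverse order of setup: sync, unload the kernel initiator modules (the rmmod lines above), kill the nvmf_tgt reactor (pid 1811732), and remove the SPDK network namespace. A condensed sketch of that order follows; the netns deletion is an assumption about what _remove_spdk_ns does, based on the cvl_0_0_ns_spdk namespace this suite creates further down:

sync
modprobe -v -r nvme-tcp                        # drops nvme_tcp and its dependent modules
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"             # $nvmfpid is recorded when nvmf_tgt starts
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns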
00:24:05.777 ************************************ 00:24:05.777 END TEST nvmf_failover 00:24:05.777 ************************************ 00:24:05.777 12:58:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:05.777 12:58:36 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:05.777 12:58:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:05.777 12:58:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.777 12:58:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.777 ************************************ 00:24:05.777 START TEST nvmf_host_discovery 00:24:05.777 ************************************ 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:05.777 * Looking for test storage... 00:24:05.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:05.777 12:58:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.777 12:58:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.054 12:58:41 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.054 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:11.055 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:11.055 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.055 12:58:41 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:11.055 Found net devices under 0000:86:00.0: cvl_0_0 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:11.055 Found net devices under 0000:86:00.1: cvl_0_1 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.055 12:58:41 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.055 12:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:24:11.314 00:24:11.314 --- 10.0.0.2 ping statistics --- 00:24:11.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.314 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:24:11.314 00:24:11.314 --- 10.0.0.1 ping statistics --- 00:24:11.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.314 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1820124 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1820124 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1820124 ']' 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.314 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.314 [2024-07-15 12:58:42.156519] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:11.314 [2024-07-15 12:58:42.156575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.314 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.314 [2024-07-15 12:58:42.227902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.572 [2024-07-15 12:58:42.307976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.572 [2024-07-15 12:58:42.308009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.572 [2024-07-15 12:58:42.308015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.572 [2024-07-15 12:58:42.308022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.572 [2024-07-15 12:58:42.308027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:11.572 [2024-07-15 12:58:42.308042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.140 12:58:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.140 [2024-07-15 12:58:43.003048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.140 [2024-07-15 12:58:43.015175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.140 null0 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.140 null1 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:12.140 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1820370 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1820370 /tmp/host.sock 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1820370 ']' 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:12.141 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.141 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.141 [2024-07-15 12:58:43.092144] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:12.141 [2024-07-15 12:58:43.092186] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820370 ] 00:24:12.404 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.405 [2024-07-15 12:58:43.159110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.405 [2024-07-15 12:58:43.238274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:12.974 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.234 12:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.234 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.495 [2024-07-15 12:58:44.234394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:13.495 12:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:14.084 [2024-07-15 12:58:44.962304] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:14.084 [2024-07-15 12:58:44.962324] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:14.084 [2024-07-15 12:58:44.962338] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:14.343 [2024-07-15 12:58:45.048608] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:14.343 [2024-07-15 12:58:45.146590] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:14.343 [2024-07-15 12:58:45.146608] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:14.602 12:58:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.602 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:14.860 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:24:14.861 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:15.118 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.119 [2024-07-15 12:58:45.862797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:15.119 [2024-07-15 12:58:45.863695] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:15.119 [2024-07-15 12:58:45.863717] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.119 [2024-07-15 12:58:45.950286] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:24:15.119 12:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.119 12:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:15.119 12:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:15.402 [2024-07-15 12:58:46.254599] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:15.402 [2024-07-15 12:58:46.254617] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:15.402 [2024-07-15 12:58:46.254622] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:16.347 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.348 [2024-07-15 12:58:47.103003] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:16.348 [2024-07-15 12:58:47.103025] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:16.348 [2024-07-15 12:58:47.104558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.348 [2024-07-15 12:58:47.104575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.348 [2024-07-15 12:58:47.104584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.348 [2024-07-15 12:58:47.104591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.348 [2024-07-15 12:58:47.104599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.348 [2024-07-15 12:58:47.104606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.348 [2024-07-15 12:58:47.104613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.348 [2024-07-15 12:58:47.104619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.348 [2024-07-15 12:58:47.104626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:16.348 [2024-07-15 12:58:47.114570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.348 [2024-07-15 12:58:47.124609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.348 [2024-07-15 12:58:47.124764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.348 [2024-07-15 12:58:47.124779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.348 [2024-07-15 12:58:47.124789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.348 [2024-07-15 12:58:47.124800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.348 [2024-07-15 12:58:47.124810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.348 [2024-07-15 12:58:47.124817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.348 [2024-07-15 12:58:47.124824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.348 [2024-07-15 12:58:47.124835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
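The waitforcondition calls that dominate this stretch of the trace (autotest_common.sh@912-918) are a bounded retry loop: re-evaluate a condition string once a second, up to ten times. A minimal sketch reconstructed from the xtrace fragments above; the function name, the variable names, and the eval/sleep/return steps all appear in the trace, while the exact loop body is an approximation:

    # Retry a shell condition up to $max times, one second apart.
    # Returns 0 as soon as the condition holds, 1 if it never does.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # The condition arrives as a string, e.g.
            # '[[ "$(get_subsystem_names)" == "nvme0" ]]', so eval it.
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }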
00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.348 [2024-07-15 12:58:47.134666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.348 [2024-07-15 12:58:47.134843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.348 [2024-07-15 12:58:47.134856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.348 [2024-07-15 12:58:47.134863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.348 [2024-07-15 12:58:47.134874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.348 [2024-07-15 12:58:47.134883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.348 [2024-07-15 12:58:47.134890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.348 [2024-07-15 12:58:47.134896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.348 [2024-07-15 12:58:47.134906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.348 [2024-07-15 12:58:47.144717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.348 [2024-07-15 12:58:47.144903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.348 [2024-07-15 12:58:47.144915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.348 [2024-07-15 12:58:47.144922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.348 [2024-07-15 12:58:47.144933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.348 [2024-07-15 12:58:47.144942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.348 [2024-07-15 12:58:47.144948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.348 [2024-07-15 12:58:47.144955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.348 [2024-07-15 12:58:47.144964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
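The strings compared through eval above come from two RPC wrappers this log exercises constantly (discovery.sh@59 and @55); their pipelines are printed verbatim in the xtrace, so the reconstruction below is direct. rpc_cmd is the suite's wrapper around SPDK's rpc.py, and -s /tmp/host.sock points it at the host application's RPC socket rather than the target's default one:

    # Space-separated, sorted names of the NVMe controllers the host's
    # bdev_nvme module has attached (empty until discovery attaches one).
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # Space-separated, sorted names of the bdevs built on top of them,
    # e.g. "nvme0n1 nvme0n2" once two namespaces are exposed.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }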
00:24:16.348 [2024-07-15 12:58:47.154767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.348 [2024-07-15 12:58:47.154970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.348 [2024-07-15 12:58:47.154983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.348 [2024-07-15 12:58:47.154991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.348 [2024-07-15 12:58:47.155002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.348 [2024-07-15 12:58:47.155012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.348 [2024-07-15 12:58:47.155018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.348 [2024-07-15 12:58:47.155024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.348 [2024-07-15 12:58:47.155034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.348 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:16.348 [2024-07-15 12:58:47.164824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.348 [2024-07-15 12:58:47.165029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.348 [2024-07-15 12:58:47.165040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.348 [2024-07-15 12:58:47.165047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.348 [2024-07-15 12:58:47.165057] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.348 [2024-07-15 12:58:47.165067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.348 [2024-07-15 12:58:47.165073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.348 [2024-07-15 12:58:47.165080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.348 [2024-07-15 12:58:47.165089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.348 [2024-07-15 12:58:47.174874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.348 [2024-07-15 12:58:47.175087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.349 [2024-07-15 12:58:47.175100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.349 [2024-07-15 12:58:47.175111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.349 [2024-07-15 12:58:47.175122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.349 [2024-07-15 12:58:47.175131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.349 [2024-07-15 12:58:47.175137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.349 [2024-07-15 12:58:47.175143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.349 [2024-07-15 12:58:47.175153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.349 [2024-07-15 12:58:47.184927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.349 [2024-07-15 12:58:47.185055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.349 [2024-07-15 12:58:47.185067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.349 [2024-07-15 12:58:47.185074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.349 [2024-07-15 12:58:47.185084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.349 [2024-07-15 12:58:47.185094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.349 [2024-07-15 12:58:47.185099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.349 [2024-07-15 12:58:47.185106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.349 [2024-07-15 12:58:47.185115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
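The port assertions earlier in the log ([[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]) and the one this wait is converging toward ([[ 4421 == \4\4\2\1 ]]) rely on a third wrapper (discovery.sh@63) that lists the transport service ID of every path behind one controller; the jq filter is again taken directly from the xtrace:

    # Numerically sorted ports of all paths on one controller, e.g.
    # "4420 4421" while both listeners exist, and "4421" once the first
    # listener is removed and its path is pruned by discovery.
    get_subsystem_paths() {
        local ctrlr_name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr_name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }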
00:24:16.349 [2024-07-15 12:58:47.194977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.349 [2024-07-15 12:58:47.195155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.349 [2024-07-15 12:58:47.195168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.349 [2024-07-15 12:58:47.195176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.349 [2024-07-15 12:58:47.195186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.349 [2024-07-15 12:58:47.195196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.349 [2024-07-15 12:58:47.195202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.349 [2024-07-15 12:58:47.195208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.349 [2024-07-15 12:58:47.195217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.349 [2024-07-15 12:58:47.205030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.349 [2024-07-15 12:58:47.205309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.349 [2024-07-15 12:58:47.205322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.349 [2024-07-15 12:58:47.205330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.349 [2024-07-15 12:58:47.205340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.349 [2024-07-15 12:58:47.205363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.349 [2024-07-15 12:58:47.205373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.349 [2024-07-15 12:58:47.205379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.349 [2024-07-15 12:58:47.205389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
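The is_notification_count_eq checks threaded through the log are built on SPDK's notification bus: each bdev created or deleted on the host emits an event, and notify_get_notifications -i <id> returns only events newer than <id>. A sketch of the counting helper reconstructed from discovery.sh@74-75; the accumulating notify_id is an inference, but it matches the 0, 1, 2, 2, 4 progression visible in this trace:

    notify_id=0

    # Count notifications newer than the last consumed id, then advance
    # the high-water mark so each event is counted exactly once.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }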
00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:16.349 [2024-07-15 12:58:47.215081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.349 [2024-07-15 12:58:47.215357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.349 [2024-07-15 12:58:47.215370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.349 [2024-07-15 12:58:47.215378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.349 [2024-07-15 12:58:47.215389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.349 [2024-07-15 12:58:47.215407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.349 [2024-07-15 12:58:47.215416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.349 [2024-07-15 12:58:47.215423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.349 [2024-07-15 12:58:47.215435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
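The reconnect storm above (resetting controller, connect() failed with errno = 111, i.e. ECONNREFUSED, then Resetting controller failed.) is the intended fallout of host/discovery.sh@127: the 4420 listener was torn down on the target while the host still held a path to it, so every reconnect to that port is refused until the next discovery log page prunes the stale path. The two-step pattern being validated, with the NQN, address, and ports exactly as they appear in the trace:

    # Target side: drop the first listener out from under the host.
    rpc_cmd nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side: the bdevs must survive on the remaining path, and the
    # path list must eventually shrink to just the second port.
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'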
00:24:16.349 [2024-07-15 12:58:47.225135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.349 [2024-07-15 12:58:47.225387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.349 [2024-07-15 12:58:47.225399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55f10 with addr=10.0.0.2, port=4420 00:24:16.349 [2024-07-15 12:58:47.225406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55f10 is same with the state(5) to be set 00:24:16.349 [2024-07-15 12:58:47.225416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55f10 (9): Bad file descriptor 00:24:16.349 [2024-07-15 12:58:47.225432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.349 [2024-07-15 12:58:47.225439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.349 [2024-07-15 12:58:47.225449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.349 [2024-07-15 12:58:47.225458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.349 [2024-07-15 12:58:47.230548] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:16.349 [2024-07-15 12:58:47.230565] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:24:16.349 12:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:17.725 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:17.725 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:17.725 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:17.725 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:17.726 12:58:48 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.726 12:58:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.726 12:58:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.662 [2024-07-15 12:58:49.584809] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:18.662 [2024-07-15 12:58:49.584826] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:18.662 [2024-07-15 12:58:49.584838] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:18.920 [2024-07-15 12:58:49.713236] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:19.179 [2024-07-15 12:58:49.939998] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:19.179 [2024-07-15 12:58:49.940025] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.179 request: 00:24:19.179 { 00:24:19.179 "name": "nvme", 00:24:19.179 "trtype": "tcp", 00:24:19.179 "traddr": "10.0.0.2", 00:24:19.179 "adrfam": "ipv4", 00:24:19.179 "trsvcid": "8009", 00:24:19.179 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:19.179 "wait_for_attach": true, 00:24:19.179 "method": "bdev_nvme_start_discovery", 00:24:19.179 "req_id": 1 00:24:19.179 } 00:24:19.179 Got JSON-RPC error response 00:24:19.179 response: 00:24:19.179 { 00:24:19.179 "code": -17, 00:24:19.179 "message": "File exists" 00:24:19.179 } 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:19.179 12:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 request: 00:24:19.179 { 00:24:19.179 "name": "nvme_second", 00:24:19.179 "trtype": "tcp", 00:24:19.179 "traddr": "10.0.0.2", 00:24:19.179 "adrfam": "ipv4", 00:24:19.179 "trsvcid": "8009", 00:24:19.179 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:19.179 "wait_for_attach": true, 00:24:19.179 "method": "bdev_nvme_start_discovery", 00:24:19.179 "req_id": 1 00:24:19.179 } 00:24:19.179 Got JSON-RPC error response 00:24:19.179 response: 00:24:19.179 { 00:24:19.179 "code": -17, 00:24:19.179 "message": "File exists" 00:24:19.179 } 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.438 12:58:50 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.438 12:58:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.374 [2024-07-15 12:58:51.188125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.374 [2024-07-15 12:58:51.188155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6f880 with addr=10.0.0.2, port=8010 00:24:20.374 [2024-07-15 12:58:51.188171] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:20.374 [2024-07-15 12:58:51.188177] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:20.374 [2024-07-15 12:58:51.188184] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:21.310 [2024-07-15 12:58:52.190656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.310 [2024-07-15 12:58:52.190680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e92a00 with addr=10.0.0.2, port=8010 00:24:21.310 [2024-07-15 12:58:52.190691] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:21.310 [2024-07-15 12:58:52.190697] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:21.310 [2024-07-15 12:58:52.190719] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:22.242 [2024-07-15 12:58:53.192821] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:22.242 request: 00:24:22.242 { 00:24:22.242 "name": "nvme_second", 00:24:22.242 "trtype": "tcp", 00:24:22.242 "traddr": "10.0.0.2", 00:24:22.242 "adrfam": "ipv4", 00:24:22.242 "trsvcid": "8010", 00:24:22.242 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.242 "wait_for_attach": false, 00:24:22.242 "attach_timeout_ms": 3000, 00:24:22.242 "method": "bdev_nvme_start_discovery", 00:24:22.242 "req_id": 1 00:24:22.242 } 00:24:22.242 Got JSON-RPC error response 00:24:22.242 response: 00:24:22.242 { 00:24:22.242 "code": -110, 
00:24:22.242 "message": "Connection timed out" 00:24:22.242 } 00:24:22.242 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1820370 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.500 rmmod nvme_tcp 00:24:22.500 rmmod nvme_fabrics 00:24:22.500 rmmod nvme_keyring 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1820124 ']' 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1820124 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1820124 ']' 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1820124 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1820124 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1820124' 00:24:22.500 killing process with pid 1820124 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1820124 00:24:22.500 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1820124 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.758 12:58:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.291 12:58:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.291 00:24:25.291 real 0m19.344s 00:24:25.291 user 0m24.903s 00:24:25.291 sys 0m5.793s 00:24:25.291 12:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.291 12:58:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.291 ************************************ 00:24:25.291 END TEST nvmf_host_discovery 00:24:25.291 ************************************ 00:24:25.291 12:58:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.292 12:58:55 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:25.292 12:58:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.292 12:58:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.292 12:58:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.292 ************************************ 00:24:25.292 START TEST nvmf_host_multipath_status 00:24:25.292 ************************************ 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:25.292 * Looking for test storage... 
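[editor's note] The END TEST / START TEST banners above come from the autotest run_test wrapper, which also produces the real/user/sys summary printed at the end of each test. A minimal sketch reconstructed from nothing more than the banners and timing lines in this log; the actual helper in autotest_common.sh may differ:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                         # run the test script, emitting real/user/sys
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The "'[' 3 -le 1 ']'" check visible above is the wrapper guarding against being called with no test command.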
00:24:25.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.292 12:58:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.292 12:58:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
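[editor's note] The device walk that continues below is plain sysfs globbing: for each matching e810 PCI function, the script lists the function's net/ directory and reports the interface it finds. A standalone equivalent, using the bus addresses this host actually reports (a sketch mirroring the common.sh steps at @383/@399/@400, not the code verbatim):

    for pci in /sys/bus/pci/devices/0000:86:00.0 /sys/bus/pci/devices/0000:86:00.1; do
        pci_net_devs=("$pci"/net/*)               # e.g. .../0000:86:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the interface name
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done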
00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.566 Found net devices under 0000:86:00.0: cvl_0_0 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.566 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.567 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.567 12:59:01 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.567 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.826 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.826 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:24:30.826 00:24:30.826 --- 10.0.0.2 ping statistics --- 00:24:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.826 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:24:30.826 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:24:30.826 00:24:30.826 --- 10.0.0.1 ping statistics --- 00:24:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.826 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1825674 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1825674 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1825674 ']' 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.827 12:59:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:30.827 [2024-07-15 12:59:01.631962] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
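[editor's note] The prologue above (nvmf_tcp_init through nvmfappstart) reduces to a short sequence: wire one e810 port into a private namespace, address both ends, verify reachability, then start nvmf_tgt inside that namespace and wait for its RPC socket. Condensed from the commands replayed in this log; only the backgrounding and the $! capture are assumptions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # host -> target namespace, as shown above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &            # -m 0x3: two reactors, cores 0 and 1
    nvmfpid=$!
    waitforlisten "$nvmfpid"               # blocks until /var/tmp/spdk.sock accepts RPCs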
00:24:30.827 [2024-07-15 12:59:01.632004] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.827 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.827 [2024-07-15 12:59:01.699364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:30.827 [2024-07-15 12:59:01.778215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.827 [2024-07-15 12:59:01.778253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.827 [2024-07-15 12:59:01.778260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.827 [2024-07-15 12:59:01.778266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.827 [2024-07-15 12:59:01.778272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.827 [2024-07-15 12:59:01.778313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.827 [2024-07-15 12:59:01.778314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1825674 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:31.764 [2024-07-15 12:59:02.629973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.764 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:32.023 Malloc0 00:24:32.023 12:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:32.282 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.540 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.540 [2024-07-15 12:59:03.391749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.540 12:59:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:32.799 [2024-07-15 12:59:03.560209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1825938 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1825938 /var/tmp/bdevperf.sock 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1825938 ']' 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.800 12:59:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 12:59:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.737 12:59:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:33.737 12:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:33.737 12:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:33.995 Nvme0n1 00:24:33.995 12:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:34.563 Nvme0n1 00:24:34.563 12:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:34.563 12:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:36.548 12:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:36.548 12:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:36.807 12:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:37.065 12:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:38.002 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:38.002 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.002 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.002 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:38.262 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.262 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:38.262 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.262 12:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:38.262 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.262 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:38.262 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.262 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.521 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.521 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.521 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.521 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.780 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.780 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.780 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.780 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.780 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.780 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:39.040 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.040 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:39.040 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.040 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:39.040 12:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:39.298 12:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:39.557 12:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:40.492 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:40.492 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:40.492 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.492 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.750 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.750 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:40.750 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.751 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.751 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.751 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.751 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.751 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.009 12:59:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.009 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.009 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.009 12:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.267 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.267 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.267 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.267 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:41.525 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:41.783 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:42.042 12:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:42.979 12:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:42.979 12:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:42.979 12:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.979 12:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.238 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.238 12:59:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:43.238 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.238 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.497 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.755 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.755 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.755 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.755 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.013 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.013 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.013 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.013 12:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.272 12:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.272 12:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:44.272 12:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.531 12:59:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:44.531 12:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.907 12:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.164 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.164 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.164 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.164 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.433 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.433 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.433 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.433 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:46.691 12:59:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.691 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:46.691 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.691 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:46.691 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.691 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:46.691 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:46.949 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:47.207 12:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:48.140 12:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:48.140 12:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:48.140 12:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.140 12:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.398 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.398 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.398 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.398 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.398 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.398 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.399 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.399 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.657 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.657 12:59:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.657 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.658 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:48.916 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.916 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:48.916 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.916 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.175 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.175 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.175 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.175 12:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.175 12:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.175 12:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:49.175 12:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:49.434 12:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:49.693 12:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:50.710 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:50.710 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:50.710 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.710 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:50.710 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.710 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:50.968 12:59:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.968 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:50.968 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.968 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:50.968 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.968 12:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.226 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.226 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.226 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.226 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.485 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:51.776 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.776 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:52.034 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:52.034 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:52.034 12:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:52.293 12:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:53.233 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:53.233 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:53.233 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.233 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.495 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.495 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:53.495 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:53.495 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.753 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.753 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:53.753 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:53.753 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.011 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.012 12:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.270 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.270 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.271 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.271 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:54.529 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.529 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:54.529 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:54.788 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:54.788 12:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.164 12:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.164 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.164 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.164 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.164 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.423 12:59:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.423 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:56.423 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.423 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:56.681 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.682 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:56.682 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.682 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:56.941 12:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.200 12:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:57.458 12:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:58.396 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:58.396 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:58.396 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.396 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.654 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.654 12:59:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:58.654 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.654 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.913 12:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.171 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.171 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.171 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.171 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.430 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.430 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:59.430 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.430 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.689 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.689 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:59.689 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:59.948 12:59:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:59.948 12:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:01.327 12:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:01.327 12:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:01.327 12:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.327 12:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.327 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.586 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.586 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.586 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.586 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.845 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.845 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:01.845 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.845 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:02.104 12:59:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.104 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:02.104 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.104 12:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1825938 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1825938 ']' 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1825938 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.104 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1825938 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1825938' 00:25:02.367 killing process with pid 1825938 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1825938 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1825938 00:25:02.367 Connection closed with partial response: 00:25:02.367 00:25:02.367 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1825938 00:25:02.367 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.367 [2024-07-15 12:59:03.634048] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:02.367 [2024-07-15 12:59:03.634097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825938 ] 00:25:02.367 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.367 [2024-07-15 12:59:03.700488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.367 [2024-07-15 12:59:03.779570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.367 Running I/O for 90 seconds... 
00:25:02.367 [2024-07-15 12:59:17.764927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.764969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.367 [2024-07-15 12:59:17.765821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:02.367 [2024-07-15 12:59:17.765835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.765981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.765997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:02.368 [2024-07-15 12:59:17.766123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.368 [2024-07-15 12:59:17.766638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.368 [2024-07-15 12:59:17.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.368 [2024-07-15 12:59:17.766670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.368 [2024-07-15 12:59:17.766677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.369 [2024-07-15 12:59:17.766697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.369 [2024-07-15 12:59:17.766716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:02.369 [2024-07-15 12:59:17.766730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.369 [2024-07-15 12:59:17.766738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.369 [2024-07-15 12:59:17.766853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.369 [2024-07-15 12:59:17.766877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.766900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.766923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.766946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.766969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.766985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.766992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:02.369 [2024-07-15 12:59:17.767505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:02.369 [2024-07-15 12:59:17.767815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.369 [2024-07-15 12:59:17.767821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.767990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.767997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:02.370 [2024-07-15 12:59:17.768321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:17.768360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:17.768529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.370 [2024-07-15 12:59:17.768536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.823725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.823732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.824720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.824738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.824755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.824762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.824774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.824781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.824794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.824801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 12:59:30.824813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 12:59:30.824820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 
[2024-07-15 12:59:30.824959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.824990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.824997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77216 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.371 [2024-07-15 12:59:30.825453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:02.371 [2024-07-15 12:59:30.825465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.372 [2024-07-15 12:59:30.825472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:02.372 [2024-07-15 12:59:30.825484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.372 [2024-07-15 12:59:30.825491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:02.372 [2024-07-15 12:59:30.825504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.372 [2024-07-15 12:59:30.825510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:02.372 [2024-07-15 12:59:30.825522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.372 [2024-07-15 12:59:30.825529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.372 [2024-07-15 
12:59:30.825542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.825561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.825579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.825599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.825618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.825638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.825658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.825665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:02.372 [2024-07-15 12:59:30.826675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.372 [2024-07-15 12:59:30.826693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:02.372 Received shutdown signal, test time was about 27.551658 seconds
00:25:02.372
00:25:02.372 Latency(us)
00:25:02.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:02.372 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:02.372 Verification LBA range: start 0x0 length 0x4000
00:25:02.372 Nvme0n1 : 27.55 10159.35 39.68 0.00 0.00 12577.24 316.99 3019898.88
00:25:02.372 ===================================================================================================================
00:25:02.372 Total : 10159.35 39.68 0.00 0.00 12577.24 316.99 3019898.88
00:25:02.372 12:59:33 nvmf_tcp.nvmf_host_multipath_status
-- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.631 rmmod nvme_tcp 00:25:02.631 rmmod nvme_fabrics 00:25:02.631 rmmod nvme_keyring 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1825674 ']' 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1825674 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1825674 ']' 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1825674 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1825674 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1825674' 00:25:02.631 killing process with pid 1825674 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1825674 00:25:02.631 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1825674 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.890 12:59:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.890 12:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.450 12:59:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.450 00:25:05.450 real 0m40.129s 00:25:05.450 user 1m48.271s 00:25:05.450 sys 0m10.839s 00:25:05.450 12:59:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.450 12:59:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:05.451 ************************************ 00:25:05.451 END TEST nvmf_host_multipath_status 00:25:05.451 ************************************ 00:25:05.451 12:59:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:05.451 12:59:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:05.451 12:59:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.451 12:59:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.451 12:59:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:05.451 ************************************ 00:25:05.451 START TEST nvmf_discovery_remove_ifc 00:25:05.451 ************************************ 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:05.451 * Looking for test storage... 
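The teardown traced above (multipath_status.sh@143-148 calling into nvmftestfini and nvmfcleanup from test/nvmf/common.sh) boils down to a handful of commands. A condensed sketch of the effective sequence, assuming root privileges and the pid, paths, and interface name from this run:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem under test
  sync                           # flush page cache before pulling modules
  modprobe -v -r nvme-tcp        # removes nvme_tcp plus the nvme_fabrics/nvme_keyring deps rmmod'ed above
  modprobe -v -r nvme-fabrics    # no-op here; already removed together with nvme-tcp
  kill 1825674                   # SIGTERM the nvmf_tgt reactor; the harness then waits on the pid
  ip -4 addr flush cvl_0_1       # clear the test IPv4 address off interface cvl_0_1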
00:25:05.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.451 12:59:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.451 12:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:10.806 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:10.806 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.806 12:59:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:10.806 Found net devices under 0000:86:00.0: cvl_0_0 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:10.806 Found net devices under 0000:86:00.1: cvl_0_1 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:25:10.806 00:25:10.806 --- 10.0.0.2 ping statistics --- 00:25:10.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.806 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:25:10.806 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:25:10.806 00:25:10.806 --- 10.0.0.1 ping statistics --- 00:25:10.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.806 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.807 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1834463 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1834463 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1834463 ']' 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.066 12:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.066 [2024-07-15 12:59:41.828575] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
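The nvmf_tcp_init sequence traced above turns the dual-port E810 NIC into a self-contained TCP test topology: port cvl_0_0 is moved into a private network namespace to act as the target, its sibling cvl_0_1 stays in the root namespace as the initiator, and both directions are verified with a one-packet ping. Condensed from the traced commands (the interface and namespace names are whatever this run selected):

# target side: hide one port in its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# initiator side: the sibling port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# allow NVMe/TCP (port 4420) in, then check reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmfappstart then launches the target binary inside that namespace and blocks in waitforlisten until the RPC socket answers. waitforlisten itself is defined in autotest_common.sh and is not expanded in this trace; a minimal stand-in, assuming rpc_get_methods as the probe, could be:

# start the target in the namespace, exactly as in the trace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# hypothetical waitforlisten stand-in: poll the RPC socket until it responds
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done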
00:25:11.066 [2024-07-15 12:59:41.828617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.066 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.066 [2024-07-15 12:59:41.896954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.066 [2024-07-15 12:59:41.975530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.066 [2024-07-15 12:59:41.975562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.066 [2024-07-15 12:59:41.975569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.066 [2024-07-15 12:59:41.975575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.066 [2024-07-15 12:59:41.975580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.066 [2024-07-15 12:59:41.975595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.003 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.004 [2024-07-15 12:59:42.681700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.004 [2024-07-15 12:59:42.689819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:12.004 null0 00:25:12.004 [2024-07-15 12:59:42.721838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1834644 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1834644 /tmp/host.sock 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1834644 ']' 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:12.004 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.004 12:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.004 [2024-07-15 12:59:42.787277] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:12.004 [2024-07-15 12:59:42.787320] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834644 ] 00:25:12.004 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.004 [2024-07-15 12:59:42.853553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.004 [2024-07-15 12:59:42.934092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.941 12:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.878 [2024-07-15 12:59:44.740389] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:13.878 [2024-07-15 12:59:44.740409] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:13.878 [2024-07-15 12:59:44.740422] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:13.878 [2024-07-15 12:59:44.828698] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:14.138 [2024-07-15 12:59:44.891552] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:14.138 [2024-07-15 12:59:44.891597] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:14.138 [2024-07-15 12:59:44.891617] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:14.138 [2024-07-15 12:59:44.891631] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:14.138 [2024-07-15 12:59:44.891649] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.138 [2024-07-15 12:59:44.898807] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b92e30 was disconnected and freed. delete nvme_qpair. 
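The wait_for_bdev / get_bdev_list pair that the trace keeps re-entering from here on reduces to a one-second polling loop over the host-side RPC socket. Reconstructed from the traced commands (rpc_cmd is the suite's wrapper around scripts/rpc.py; the real helpers live in discovery_remove_ifc.sh):

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {    # poll until the bdev list equals the expected string
    while [[ $(get_bdev_list) != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1    # here: the namespace just attached by discovery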
00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:14.138 12:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:14.138 12:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:15.517 12:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:16.454 12:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:17.395 12:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.331 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.590 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.590 12:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
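The fault injection happened just before this polling run: the test deletes the target-side address and downs the interface, then keeps sampling the bdev list once per second. nvme0n1 remains listed for as long as the bdev layer is still permitted to reconnect:

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''    # block until controller loss finally removes nvme0n1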
00:25:19.527 [2024-07-15 12:59:50.332914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:19.527 [2024-07-15 12:59:50.332953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.527 [2024-07-15 12:59:50.332964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.527 [2024-07-15 12:59:50.332972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.527 [2024-07-15 12:59:50.332979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.527 [2024-07-15 12:59:50.332987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.527 [2024-07-15 12:59:50.332994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.527 [2024-07-15 12:59:50.333000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.527 [2024-07-15 12:59:50.333007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.527 [2024-07-15 12:59:50.333015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.527 [2024-07-15 12:59:50.333021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.527 [2024-07-15 12:59:50.333028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59690 is same with the state(5) to be set 00:25:19.527 [2024-07-15 12:59:50.342936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59690 (9): Bad file descriptor 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.527 12:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.527 [2024-07-15 12:59:50.352976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.463 [2024-07-15 12:59:51.394286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:20.463 [2024-07-15 
12:59:51.394360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b59690 with addr=10.0.0.2, port=4420 00:25:20.463 [2024-07-15 12:59:51.394390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59690 is same with the state(5) to be set 00:25:20.463 [2024-07-15 12:59:51.394440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b59690 (9): Bad file descriptor 00:25:20.463 [2024-07-15 12:59:51.395380] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.463 [2024-07-15 12:59:51.395429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.463 [2024-07-15 12:59:51.395451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.463 [2024-07-15 12:59:51.395482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.463 [2024-07-15 12:59:51.395519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.463 [2024-07-15 12:59:51.395542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.463 12:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.841 [2024-07-15 12:59:52.398040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.841 [2024-07-15 12:59:52.398065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.841 [2024-07-15 12:59:52.398072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.841 [2024-07-15 12:59:52.398080] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:21.841 [2024-07-15 12:59:52.398091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
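The errno 110 (ETIMEDOUT) connect failures above are the expected result of the downed interface, and the retry cadence follows the options passed when discovery was started: a reconnect attempt every --reconnect-delay-sec 1, I/O failed fast after --fast-io-fail-timeout-sec 1, and the controller (and with it nvme0n1) deleted once --ctrlr-loss-timeout-sec 2 passes without a successful reset. For reference, the same request issued directly through rpc.py with the values the test used:

./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach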
00:25:21.841 [2024-07-15 12:59:52.398109] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:21.841 [2024-07-15 12:59:52.398128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.841 [2024-07-15 12:59:52.398138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.841 [2024-07-15 12:59:52.398147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.841 [2024-07-15 12:59:52.398154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.841 [2024-07-15 12:59:52.398161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.841 [2024-07-15 12:59:52.398169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.841 [2024-07-15 12:59:52.398177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.841 [2024-07-15 12:59:52.398184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.841 [2024-07-15 12:59:52.398191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.841 [2024-07-15 12:59:52.398198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.841 [2024-07-15 12:59:52.398204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:21.841 [2024-07-15 12:59:52.398713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b58a80 (9): Bad file descriptor 00:25:21.841 [2024-07-15 12:59:52.399723] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:21.841 [2024-07-15 12:59:52.399735] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:21.841 12:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:22.778 12:59:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.716 [2024-07-15 12:59:54.455379] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:23.716 [2024-07-15 12:59:54.455395] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:23.716 [2024-07-15 12:59:54.455409] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.716 [2024-07-15 12:59:54.543695] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.716 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.975 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:23.975 12:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.975 [2024-07-15 12:59:54.729317] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:23.975 [2024-07-15 12:59:54.729352] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:23.975 [2024-07-15 12:59:54.729370] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:23.975 [2024-07-15 12:59:54.729383] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:23.975 [2024-07-15 12:59:54.729390] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:23.975 [2024-07-15 12:59:54.734175] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b6f8d0 was disconnected and freed. delete nvme_qpair. 
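Recovery is the mirror image of the fault: the address goes back on, the link comes up, and the discovery service still listening on 10.0.0.2:8009 re-attaches the subsystem. Because the old controller was deleted, the new attach takes the next free index on the -b nvme base name, so the test now waits for nvme1n1 instead of nvme0n1:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1    # rediscovery brings the namespace back as nvme1n1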
00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1834644 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1834644 ']' 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1834644 00:25:24.914 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1834644 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1834644' 00:25:24.915 killing process with pid 1834644 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1834644 00:25:24.915 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1834644 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:25.174 12:59:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.174 rmmod nvme_tcp 00:25:25.174 rmmod nvme_fabrics 00:25:25.174 rmmod nvme_keyring 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
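killprocess, traced twice during teardown (once for the host daemon, once for the target), boils down to a guarded kill-and-wait. A sketch reconstructed from the traced checks; the real helper lives in autotest_common.sh:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid"    # confirm the process is still running
    if [[ $(uname) == Linux ]]; then
        # refuse to kill a sudo wrapper; the reactor process itself is wanted
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]]
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}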
00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1834463 ']' 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1834463 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1834463 ']' 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1834463 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1834463 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1834463' 00:25:25.174 killing process with pid 1834463 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1834463 00:25:25.174 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1834463 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.435 12:59:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.996 12:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:27.996 00:25:27.996 real 0m22.424s 00:25:27.996 user 0m28.663s 00:25:27.996 sys 0m5.783s 00:25:27.996 12:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.996 12:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.996 ************************************ 00:25:27.996 END TEST nvmf_discovery_remove_ifc 00:25:27.996 ************************************ 00:25:27.996 12:59:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:27.996 12:59:58 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:27.996 12:59:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:27.996 12:59:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.996 12:59:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.996 ************************************ 00:25:27.996 START TEST nvmf_identify_kernel_target 00:25:27.996 ************************************ 
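The START/END banners and the real/user/sys summary around each suite come from run_test, which wraps every test script; the trace re-enters it here for identify_kernel_nvmf.sh. A hypothetical minimal equivalent of that wrapper (the actual helper in autotest_common.sh also handles xtrace bookkeeping not shown here):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # produces the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test nvmf_identify_kernel_target \
    ./test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp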
00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:27.996 * Looking for test storage... 00:25:27.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:27.996 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:27.997 12:59:58 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.997 12:59:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:33.265 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.265 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:33.266 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:33.266 Found net devices under 0000:86:00.0: cvl_0_0 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:33.266 Found net devices under 0000:86:00.1: cvl_0_1 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.266 13:00:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:25:33.266 00:25:33.266 --- 10.0.0.2 ping statistics --- 00:25:33.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.266 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:25:33.266 00:25:33.266 --- 10.0.0.1 ping statistics --- 00:25:33.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.266 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.266 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.528 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:33.529 13:00:04 
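
With the ping checks above, nvmf_tcp_init has finished the physical-NIC plumbing this test relies on: one e810 port is hidden in a private network namespace, its sibling stays in the root namespace, and TCP port 4420 is opened between them. Condensed from the trace, with interface, namespace, and address names exactly as logged:

ip netns add cvl_0_0_ns_spdk                # namespace for the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move one port out of the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # root-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                          # prove reachability in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

For this particular test the kernel target then listens on the root-namespace address, 10.0.0.1, which is why get_main_ns_ip resolves target_ip to NVMF_INITIATOR_IP in the trace that follows.
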
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:33.529 13:00:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:36.064 Waiting for block devices as requested 00:25:36.064 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:36.324 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:36.324 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:36.324 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:36.583 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:36.583 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:36.583 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:36.583 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:36.843 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:36.843 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:36.843 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:37.102 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:37.102 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:37.102 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:37.361 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:37.361 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:37.361 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:37.361 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:37.621 No valid GPT data, bailing 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:37.621 00:25:37.621 Discovery Log Number of Records 2, Generation counter 2 00:25:37.621 =====Discovery Log Entry 0====== 00:25:37.621 trtype: tcp 00:25:37.621 adrfam: ipv4 00:25:37.621 subtype: current discovery subsystem 00:25:37.621 treq: not specified, sq flow control disable supported 00:25:37.621 portid: 1 00:25:37.621 trsvcid: 4420 00:25:37.621 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:37.621 traddr: 10.0.0.1 00:25:37.621 eflags: none 00:25:37.621 sectype: none 00:25:37.621 =====Discovery Log Entry 1====== 00:25:37.621 trtype: tcp 00:25:37.621 adrfam: ipv4 00:25:37.621 subtype: nvme subsystem 00:25:37.621 treq: not specified, sq flow control disable supported 00:25:37.621 portid: 1 00:25:37.621 trsvcid: 4420 00:25:37.621 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:37.621 traddr: 10.0.0.1 00:25:37.621 eflags: none 00:25:37.621 sectype: none 00:25:37.621 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:37.621 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:37.621 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.621 ===================================================== 00:25:37.621 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:37.621 ===================================================== 00:25:37.621 Controller Capabilities/Features 00:25:37.621 ================================ 00:25:37.621 Vendor ID: 0000 00:25:37.621 Subsystem Vendor ID: 0000 00:25:37.621 Serial Number: ccd054bda56cdf6175ca 00:25:37.621 Model Number: Linux 00:25:37.621 Firmware Version: 6.7.0-68 00:25:37.621 Recommended Arb Burst: 0 00:25:37.621 IEEE OUI Identifier: 00 00 00 00:25:37.621 Multi-path I/O 00:25:37.621 May have multiple subsystem ports: No 00:25:37.621 May have multiple 
controllers: No 00:25:37.621 Associated with SR-IOV VF: No 00:25:37.621 Max Data Transfer Size: Unlimited 00:25:37.621 Max Number of Namespaces: 0 00:25:37.621 Max Number of I/O Queues: 1024 00:25:37.621 NVMe Specification Version (VS): 1.3 00:25:37.621 NVMe Specification Version (Identify): 1.3 00:25:37.621 Maximum Queue Entries: 1024 00:25:37.621 Contiguous Queues Required: No 00:25:37.621 Arbitration Mechanisms Supported 00:25:37.621 Weighted Round Robin: Not Supported 00:25:37.621 Vendor Specific: Not Supported 00:25:37.621 Reset Timeout: 7500 ms 00:25:37.621 Doorbell Stride: 4 bytes 00:25:37.621 NVM Subsystem Reset: Not Supported 00:25:37.621 Command Sets Supported 00:25:37.621 NVM Command Set: Supported 00:25:37.621 Boot Partition: Not Supported 00:25:37.621 Memory Page Size Minimum: 4096 bytes 00:25:37.621 Memory Page Size Maximum: 4096 bytes 00:25:37.621 Persistent Memory Region: Not Supported 00:25:37.621 Optional Asynchronous Events Supported 00:25:37.621 Namespace Attribute Notices: Not Supported 00:25:37.621 Firmware Activation Notices: Not Supported 00:25:37.621 ANA Change Notices: Not Supported 00:25:37.621 PLE Aggregate Log Change Notices: Not Supported 00:25:37.621 LBA Status Info Alert Notices: Not Supported 00:25:37.621 EGE Aggregate Log Change Notices: Not Supported 00:25:37.621 Normal NVM Subsystem Shutdown event: Not Supported 00:25:37.621 Zone Descriptor Change Notices: Not Supported 00:25:37.621 Discovery Log Change Notices: Supported 00:25:37.621 Controller Attributes 00:25:37.621 128-bit Host Identifier: Not Supported 00:25:37.621 Non-Operational Permissive Mode: Not Supported 00:25:37.621 NVM Sets: Not Supported 00:25:37.621 Read Recovery Levels: Not Supported 00:25:37.621 Endurance Groups: Not Supported 00:25:37.621 Predictable Latency Mode: Not Supported 00:25:37.621 Traffic Based Keep ALive: Not Supported 00:25:37.621 Namespace Granularity: Not Supported 00:25:37.621 SQ Associations: Not Supported 00:25:37.621 UUID List: Not Supported 00:25:37.622 Multi-Domain Subsystem: Not Supported 00:25:37.622 Fixed Capacity Management: Not Supported 00:25:37.622 Variable Capacity Management: Not Supported 00:25:37.622 Delete Endurance Group: Not Supported 00:25:37.622 Delete NVM Set: Not Supported 00:25:37.622 Extended LBA Formats Supported: Not Supported 00:25:37.622 Flexible Data Placement Supported: Not Supported 00:25:37.622 00:25:37.622 Controller Memory Buffer Support 00:25:37.622 ================================ 00:25:37.622 Supported: No 00:25:37.622 00:25:37.622 Persistent Memory Region Support 00:25:37.622 ================================ 00:25:37.622 Supported: No 00:25:37.622 00:25:37.622 Admin Command Set Attributes 00:25:37.622 ============================ 00:25:37.622 Security Send/Receive: Not Supported 00:25:37.622 Format NVM: Not Supported 00:25:37.622 Firmware Activate/Download: Not Supported 00:25:37.622 Namespace Management: Not Supported 00:25:37.622 Device Self-Test: Not Supported 00:25:37.622 Directives: Not Supported 00:25:37.622 NVMe-MI: Not Supported 00:25:37.622 Virtualization Management: Not Supported 00:25:37.622 Doorbell Buffer Config: Not Supported 00:25:37.622 Get LBA Status Capability: Not Supported 00:25:37.622 Command & Feature Lockdown Capability: Not Supported 00:25:37.622 Abort Command Limit: 1 00:25:37.622 Async Event Request Limit: 1 00:25:37.622 Number of Firmware Slots: N/A 00:25:37.622 Firmware Slot 1 Read-Only: N/A 00:25:37.622 Firmware Activation Without Reset: N/A 00:25:37.622 Multiple Update Detection Support: N/A 
00:25:37.622 Firmware Update Granularity: No Information Provided 00:25:37.622 Per-Namespace SMART Log: No 00:25:37.622 Asymmetric Namespace Access Log Page: Not Supported 00:25:37.622 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:37.622 Command Effects Log Page: Not Supported 00:25:37.622 Get Log Page Extended Data: Supported 00:25:37.622 Telemetry Log Pages: Not Supported 00:25:37.622 Persistent Event Log Pages: Not Supported 00:25:37.622 Supported Log Pages Log Page: May Support 00:25:37.622 Commands Supported & Effects Log Page: Not Supported 00:25:37.622 Feature Identifiers & Effects Log Page:May Support 00:25:37.622 NVMe-MI Commands & Effects Log Page: May Support 00:25:37.622 Data Area 4 for Telemetry Log: Not Supported 00:25:37.622 Error Log Page Entries Supported: 1 00:25:37.622 Keep Alive: Not Supported 00:25:37.622 00:25:37.622 NVM Command Set Attributes 00:25:37.622 ========================== 00:25:37.622 Submission Queue Entry Size 00:25:37.622 Max: 1 00:25:37.622 Min: 1 00:25:37.622 Completion Queue Entry Size 00:25:37.622 Max: 1 00:25:37.622 Min: 1 00:25:37.622 Number of Namespaces: 0 00:25:37.622 Compare Command: Not Supported 00:25:37.622 Write Uncorrectable Command: Not Supported 00:25:37.622 Dataset Management Command: Not Supported 00:25:37.622 Write Zeroes Command: Not Supported 00:25:37.622 Set Features Save Field: Not Supported 00:25:37.622 Reservations: Not Supported 00:25:37.622 Timestamp: Not Supported 00:25:37.622 Copy: Not Supported 00:25:37.622 Volatile Write Cache: Not Present 00:25:37.622 Atomic Write Unit (Normal): 1 00:25:37.622 Atomic Write Unit (PFail): 1 00:25:37.622 Atomic Compare & Write Unit: 1 00:25:37.622 Fused Compare & Write: Not Supported 00:25:37.622 Scatter-Gather List 00:25:37.622 SGL Command Set: Supported 00:25:37.622 SGL Keyed: Not Supported 00:25:37.622 SGL Bit Bucket Descriptor: Not Supported 00:25:37.622 SGL Metadata Pointer: Not Supported 00:25:37.622 Oversized SGL: Not Supported 00:25:37.622 SGL Metadata Address: Not Supported 00:25:37.622 SGL Offset: Supported 00:25:37.622 Transport SGL Data Block: Not Supported 00:25:37.622 Replay Protected Memory Block: Not Supported 00:25:37.622 00:25:37.622 Firmware Slot Information 00:25:37.622 ========================= 00:25:37.622 Active slot: 0 00:25:37.622 00:25:37.622 00:25:37.622 Error Log 00:25:37.622 ========= 00:25:37.622 00:25:37.622 Active Namespaces 00:25:37.622 ================= 00:25:37.622 Discovery Log Page 00:25:37.622 ================== 00:25:37.622 Generation Counter: 2 00:25:37.622 Number of Records: 2 00:25:37.622 Record Format: 0 00:25:37.622 00:25:37.622 Discovery Log Entry 0 00:25:37.622 ---------------------- 00:25:37.622 Transport Type: 3 (TCP) 00:25:37.622 Address Family: 1 (IPv4) 00:25:37.622 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:37.622 Entry Flags: 00:25:37.622 Duplicate Returned Information: 0 00:25:37.622 Explicit Persistent Connection Support for Discovery: 0 00:25:37.622 Transport Requirements: 00:25:37.622 Secure Channel: Not Specified 00:25:37.622 Port ID: 1 (0x0001) 00:25:37.622 Controller ID: 65535 (0xffff) 00:25:37.622 Admin Max SQ Size: 32 00:25:37.622 Transport Service Identifier: 4420 00:25:37.622 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:37.622 Transport Address: 10.0.0.1 00:25:37.622 Discovery Log Entry 1 00:25:37.622 ---------------------- 00:25:37.622 Transport Type: 3 (TCP) 00:25:37.622 Address Family: 1 (IPv4) 00:25:37.622 Subsystem Type: 2 (NVM Subsystem) 00:25:37.622 Entry Flags: 
00:25:37.622 Duplicate Returned Information: 0 00:25:37.622 Explicit Persistent Connection Support for Discovery: 0 00:25:37.622 Transport Requirements: 00:25:37.622 Secure Channel: Not Specified 00:25:37.622 Port ID: 1 (0x0001) 00:25:37.622 Controller ID: 65535 (0xffff) 00:25:37.622 Admin Max SQ Size: 32 00:25:37.622 Transport Service Identifier: 4420 00:25:37.622 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:37.622 Transport Address: 10.0.0.1 00:25:37.622 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:37.622 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.880 get_feature(0x01) failed 00:25:37.880 get_feature(0x02) failed 00:25:37.880 get_feature(0x04) failed 00:25:37.880 ===================================================== 00:25:37.880 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:37.880 ===================================================== 00:25:37.880 Controller Capabilities/Features 00:25:37.880 ================================ 00:25:37.880 Vendor ID: 0000 00:25:37.880 Subsystem Vendor ID: 0000 00:25:37.880 Serial Number: 398b664ffa2fd791cb32 00:25:37.880 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:37.880 Firmware Version: 6.7.0-68 00:25:37.880 Recommended Arb Burst: 6 00:25:37.880 IEEE OUI Identifier: 00 00 00 00:25:37.880 Multi-path I/O 00:25:37.880 May have multiple subsystem ports: Yes 00:25:37.880 May have multiple controllers: Yes 00:25:37.880 Associated with SR-IOV VF: No 00:25:37.880 Max Data Transfer Size: Unlimited 00:25:37.880 Max Number of Namespaces: 1024 00:25:37.880 Max Number of I/O Queues: 128 00:25:37.880 NVMe Specification Version (VS): 1.3 00:25:37.880 NVMe Specification Version (Identify): 1.3 00:25:37.880 Maximum Queue Entries: 1024 00:25:37.880 Contiguous Queues Required: No 00:25:37.880 Arbitration Mechanisms Supported 00:25:37.880 Weighted Round Robin: Not Supported 00:25:37.880 Vendor Specific: Not Supported 00:25:37.880 Reset Timeout: 7500 ms 00:25:37.880 Doorbell Stride: 4 bytes 00:25:37.880 NVM Subsystem Reset: Not Supported 00:25:37.880 Command Sets Supported 00:25:37.880 NVM Command Set: Supported 00:25:37.880 Boot Partition: Not Supported 00:25:37.880 Memory Page Size Minimum: 4096 bytes 00:25:37.880 Memory Page Size Maximum: 4096 bytes 00:25:37.880 Persistent Memory Region: Not Supported 00:25:37.880 Optional Asynchronous Events Supported 00:25:37.880 Namespace Attribute Notices: Supported 00:25:37.880 Firmware Activation Notices: Not Supported 00:25:37.880 ANA Change Notices: Supported 00:25:37.880 PLE Aggregate Log Change Notices: Not Supported 00:25:37.880 LBA Status Info Alert Notices: Not Supported 00:25:37.880 EGE Aggregate Log Change Notices: Not Supported 00:25:37.880 Normal NVM Subsystem Shutdown event: Not Supported 00:25:37.880 Zone Descriptor Change Notices: Not Supported 00:25:37.880 Discovery Log Change Notices: Not Supported 00:25:37.880 Controller Attributes 00:25:37.880 128-bit Host Identifier: Supported 00:25:37.880 Non-Operational Permissive Mode: Not Supported 00:25:37.880 NVM Sets: Not Supported 00:25:37.880 Read Recovery Levels: Not Supported 00:25:37.880 Endurance Groups: Not Supported 00:25:37.880 Predictable Latency Mode: Not Supported 00:25:37.880 Traffic Based Keep ALive: Supported 00:25:37.880 Namespace Granularity: Not Supported 
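
The three get_feature failures at the top of this identify run are expected rather than a test problem: the probed feature IDs are ones the kernel nvmet fabrics controller does not implement, so spdk_nvme_identify records the errors and keeps going. For reference (IDs per the NVMe base specification; the controller node below is hypothetical):

# 0x01 Arbitration, 0x02 Power Management, 0x04 Temperature Threshold
for fid in 0x01 0x02 0x04; do
    nvme get-feature /dev/nvme1 -f "$fid"   # against kernel nvmet, expect an Invalid Field-class error
done
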
00:25:37.880 SQ Associations: Not Supported 00:25:37.880 UUID List: Not Supported 00:25:37.880 Multi-Domain Subsystem: Not Supported 00:25:37.880 Fixed Capacity Management: Not Supported 00:25:37.880 Variable Capacity Management: Not Supported 00:25:37.880 Delete Endurance Group: Not Supported 00:25:37.880 Delete NVM Set: Not Supported 00:25:37.880 Extended LBA Formats Supported: Not Supported 00:25:37.880 Flexible Data Placement Supported: Not Supported 00:25:37.880 00:25:37.880 Controller Memory Buffer Support 00:25:37.880 ================================ 00:25:37.880 Supported: No 00:25:37.880 00:25:37.880 Persistent Memory Region Support 00:25:37.880 ================================ 00:25:37.880 Supported: No 00:25:37.880 00:25:37.880 Admin Command Set Attributes 00:25:37.880 ============================ 00:25:37.880 Security Send/Receive: Not Supported 00:25:37.880 Format NVM: Not Supported 00:25:37.880 Firmware Activate/Download: Not Supported 00:25:37.880 Namespace Management: Not Supported 00:25:37.880 Device Self-Test: Not Supported 00:25:37.881 Directives: Not Supported 00:25:37.881 NVMe-MI: Not Supported 00:25:37.881 Virtualization Management: Not Supported 00:25:37.881 Doorbell Buffer Config: Not Supported 00:25:37.881 Get LBA Status Capability: Not Supported 00:25:37.881 Command & Feature Lockdown Capability: Not Supported 00:25:37.881 Abort Command Limit: 4 00:25:37.881 Async Event Request Limit: 4 00:25:37.881 Number of Firmware Slots: N/A 00:25:37.881 Firmware Slot 1 Read-Only: N/A 00:25:37.881 Firmware Activation Without Reset: N/A 00:25:37.881 Multiple Update Detection Support: N/A 00:25:37.881 Firmware Update Granularity: No Information Provided 00:25:37.881 Per-Namespace SMART Log: Yes 00:25:37.881 Asymmetric Namespace Access Log Page: Supported 00:25:37.881 ANA Transition Time : 10 sec 00:25:37.881 00:25:37.881 Asymmetric Namespace Access Capabilities 00:25:37.881 ANA Optimized State : Supported 00:25:37.881 ANA Non-Optimized State : Supported 00:25:37.881 ANA Inaccessible State : Supported 00:25:37.881 ANA Persistent Loss State : Supported 00:25:37.881 ANA Change State : Supported 00:25:37.881 ANAGRPID is not changed : No 00:25:37.881 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:37.881 00:25:37.881 ANA Group Identifier Maximum : 128 00:25:37.881 Number of ANA Group Identifiers : 128 00:25:37.881 Max Number of Allowed Namespaces : 1024 00:25:37.881 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:37.881 Command Effects Log Page: Supported 00:25:37.881 Get Log Page Extended Data: Supported 00:25:37.881 Telemetry Log Pages: Not Supported 00:25:37.881 Persistent Event Log Pages: Not Supported 00:25:37.881 Supported Log Pages Log Page: May Support 00:25:37.881 Commands Supported & Effects Log Page: Not Supported 00:25:37.881 Feature Identifiers & Effects Log Page:May Support 00:25:37.881 NVMe-MI Commands & Effects Log Page: May Support 00:25:37.881 Data Area 4 for Telemetry Log: Not Supported 00:25:37.881 Error Log Page Entries Supported: 128 00:25:37.881 Keep Alive: Supported 00:25:37.881 Keep Alive Granularity: 1000 ms 00:25:37.881 00:25:37.881 NVM Command Set Attributes 00:25:37.881 ========================== 00:25:37.881 Submission Queue Entry Size 00:25:37.881 Max: 64 00:25:37.881 Min: 64 00:25:37.881 Completion Queue Entry Size 00:25:37.881 Max: 16 00:25:37.881 Min: 16 00:25:37.881 Number of Namespaces: 1024 00:25:37.881 Compare Command: Not Supported 00:25:37.881 Write Uncorrectable Command: Not Supported 00:25:37.881 Dataset Management Command: Supported 
00:25:37.881 Write Zeroes Command: Supported 00:25:37.881 Set Features Save Field: Not Supported 00:25:37.881 Reservations: Not Supported 00:25:37.881 Timestamp: Not Supported 00:25:37.881 Copy: Not Supported 00:25:37.881 Volatile Write Cache: Present 00:25:37.881 Atomic Write Unit (Normal): 1 00:25:37.881 Atomic Write Unit (PFail): 1 00:25:37.881 Atomic Compare & Write Unit: 1 00:25:37.881 Fused Compare & Write: Not Supported 00:25:37.881 Scatter-Gather List 00:25:37.881 SGL Command Set: Supported 00:25:37.881 SGL Keyed: Not Supported 00:25:37.881 SGL Bit Bucket Descriptor: Not Supported 00:25:37.881 SGL Metadata Pointer: Not Supported 00:25:37.881 Oversized SGL: Not Supported 00:25:37.881 SGL Metadata Address: Not Supported 00:25:37.881 SGL Offset: Supported 00:25:37.881 Transport SGL Data Block: Not Supported 00:25:37.881 Replay Protected Memory Block: Not Supported 00:25:37.881 00:25:37.881 Firmware Slot Information 00:25:37.881 ========================= 00:25:37.881 Active slot: 0 00:25:37.881 00:25:37.881 Asymmetric Namespace Access 00:25:37.881 =========================== 00:25:37.881 Change Count : 0 00:25:37.881 Number of ANA Group Descriptors : 1 00:25:37.881 ANA Group Descriptor : 0 00:25:37.881 ANA Group ID : 1 00:25:37.881 Number of NSID Values : 1 00:25:37.881 Change Count : 0 00:25:37.881 ANA State : 1 00:25:37.881 Namespace Identifier : 1 00:25:37.881 00:25:37.881 Commands Supported and Effects 00:25:37.881 ============================== 00:25:37.881 Admin Commands 00:25:37.881 -------------- 00:25:37.881 Get Log Page (02h): Supported 00:25:37.881 Identify (06h): Supported 00:25:37.881 Abort (08h): Supported 00:25:37.881 Set Features (09h): Supported 00:25:37.881 Get Features (0Ah): Supported 00:25:37.881 Asynchronous Event Request (0Ch): Supported 00:25:37.881 Keep Alive (18h): Supported 00:25:37.881 I/O Commands 00:25:37.881 ------------ 00:25:37.881 Flush (00h): Supported 00:25:37.881 Write (01h): Supported LBA-Change 00:25:37.881 Read (02h): Supported 00:25:37.881 Write Zeroes (08h): Supported LBA-Change 00:25:37.881 Dataset Management (09h): Supported 00:25:37.881 00:25:37.881 Error Log 00:25:37.881 ========= 00:25:37.881 Entry: 0 00:25:37.881 Error Count: 0x3 00:25:37.881 Submission Queue Id: 0x0 00:25:37.881 Command Id: 0x5 00:25:37.881 Phase Bit: 0 00:25:37.881 Status Code: 0x2 00:25:37.881 Status Code Type: 0x0 00:25:37.881 Do Not Retry: 1 00:25:37.881 Error Location: 0x28 00:25:37.881 LBA: 0x0 00:25:37.881 Namespace: 0x0 00:25:37.881 Vendor Log Page: 0x0 00:25:37.881 ----------- 00:25:37.881 Entry: 1 00:25:37.881 Error Count: 0x2 00:25:37.881 Submission Queue Id: 0x0 00:25:37.881 Command Id: 0x5 00:25:37.881 Phase Bit: 0 00:25:37.881 Status Code: 0x2 00:25:37.881 Status Code Type: 0x0 00:25:37.881 Do Not Retry: 1 00:25:37.881 Error Location: 0x28 00:25:37.881 LBA: 0x0 00:25:37.881 Namespace: 0x0 00:25:37.881 Vendor Log Page: 0x0 00:25:37.881 ----------- 00:25:37.881 Entry: 2 00:25:37.881 Error Count: 0x1 00:25:37.881 Submission Queue Id: 0x0 00:25:37.881 Command Id: 0x4 00:25:37.881 Phase Bit: 0 00:25:37.881 Status Code: 0x2 00:25:37.881 Status Code Type: 0x0 00:25:37.881 Do Not Retry: 1 00:25:37.881 Error Location: 0x28 00:25:37.881 LBA: 0x0 00:25:37.881 Namespace: 0x0 00:25:37.881 Vendor Log Page: 0x0 00:25:37.881 00:25:37.881 Number of Queues 00:25:37.881 ================ 00:25:37.881 Number of I/O Submission Queues: 128 00:25:37.881 Number of I/O Completion Queues: 128 00:25:37.881 00:25:37.881 ZNS Specific Controller Data 00:25:37.881 
============================ 00:25:37.881 Zone Append Size Limit: 0 00:25:37.881 00:25:37.881 00:25:37.881 Active Namespaces 00:25:37.881 ================= 00:25:37.881 get_feature(0x05) failed 00:25:37.881 Namespace ID:1 00:25:37.881 Command Set Identifier: NVM (00h) 00:25:37.881 Deallocate: Supported 00:25:37.881 Deallocated/Unwritten Error: Not Supported 00:25:37.881 Deallocated Read Value: Unknown 00:25:37.881 Deallocate in Write Zeroes: Not Supported 00:25:37.881 Deallocated Guard Field: 0xFFFF 00:25:37.881 Flush: Supported 00:25:37.881 Reservation: Not Supported 00:25:37.881 Namespace Sharing Capabilities: Multiple Controllers 00:25:37.881 Size (in LBAs): 1953525168 (931GiB) 00:25:37.881 Capacity (in LBAs): 1953525168 (931GiB) 00:25:37.881 Utilization (in LBAs): 1953525168 (931GiB) 00:25:37.881 UUID: 084aad8e-a433-46e7-b86c-8c8d87222f3a 00:25:37.881 Thin Provisioning: Not Supported 00:25:37.881 Per-NS Atomic Units: Yes 00:25:37.881 Atomic Boundary Size (Normal): 0 00:25:37.881 Atomic Boundary Size (PFail): 0 00:25:37.881 Atomic Boundary Offset: 0 00:25:37.881 NGUID/EUI64 Never Reused: No 00:25:37.881 ANA group ID: 1 00:25:37.881 Namespace Write Protected: No 00:25:37.881 Number of LBA Formats: 1 00:25:37.881 Current LBA Format: LBA Format #00 00:25:37.881 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:37.881 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.881 rmmod nvme_tcp 00:25:37.881 rmmod nvme_fabrics 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.881 13:00:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.784 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:39.784 
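
From here on the test is torn down: nvmftestfini unloads the initiator modules (the rmmod lines above), then clean_kernel_target, traced below, unwinds the configfs tree that configure_kernel_target built earlier, in strict reverse order. Both halves side by side, with values as logged; the trace records only the echoed values, so the configfs attribute file names below are the standard kernel nvmet ones, filled in by assumption:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
# Build (configure_kernel_target):
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaced as Model Number above
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# Teardown (clean_kernel_target):
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet
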
13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:39.784 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:39.784 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:39.784 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:40.042 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:40.042 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:40.042 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:40.042 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:40.042 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:40.042 13:00:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:42.577 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:42.577 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:42.577 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:42.577 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:42.836 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:43.774 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:43.774 00:25:43.774 real 0m16.182s 00:25:43.774 user 0m4.030s 00:25:43.774 sys 0m8.473s 00:25:43.774 13:00:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:43.774 13:00:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.774 ************************************ 00:25:43.774 END TEST nvmf_identify_kernel_target 00:25:43.774 ************************************ 00:25:43.774 13:00:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:43.774 13:00:14 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:43.774 13:00:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:43.774 13:00:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.774 13:00:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:43.774 ************************************ 00:25:43.774 START TEST nvmf_auth_host 00:25:43.774 ************************************ 00:25:43.774 13:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:44.034 * Looking for test storage... 00:25:44.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- #
ckeys=() 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.034 13:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.375 
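
A step back before the NIC discovery continues: the digests and dhgroups arrays plus the keys/ckeys pair declared just above are the whole parameter space of this test, NVMe in-band authentication (DH-HMAC-CHAP) swept across three HMACs and five ffdhe groups against a kernel nvmet host entry. The secrets themselves are minted further down in the script; nvme-cli can generate compatible ones, sketched here rather than quoted from the script:

# -m selects the HMAC (1=SHA-256, 2=SHA-384, 3=SHA-512); -n binds the
# transformed key to the host NQN declared above.
nvme gen-dhchap-key -m 1 -n nqn.2024-02.io.spdk:host0
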
13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:49.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:49.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.375 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:49.376 Found net devices under 0000:86:00.0: 
cvl_0_0 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:49.376 Found net devices under 0000:86:00.1: cvl_0_1 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.376 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:25:49.634 00:25:49.634 --- 10.0.0.2 ping statistics --- 00:25:49.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.634 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:25:49.634 00:25:49.634 --- 10.0.0.1 ping statistics --- 00:25:49.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.634 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1847105 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1847105 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1847105 ']' 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
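[editor's note] At this point nvmf_tcp_init has turned the two e810 ports into a point-to-point test bed: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), TCP/4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt starts inside the namespace. A condensed sketch of the same plumbing, with interface names and flags taken from the trace:

# Minimal reconstruction of the namespace bring-up traced above.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                # target port lives in the netns
ip addr add 10.0.0.1/24 dev "$INI_IF"            # initiator side, default netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &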
00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.634 13:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb2a76b317357c7bd456a8fb7b24d172 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.U3H 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb2a76b317357c7bd456a8fb7b24d172 0 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb2a76b317357c7bd456a8fb7b24d172 0 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb2a76b317357c7bd456a8fb7b24d172 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.U3H 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.U3H 00:25:50.600 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.U3H 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:50.601 
13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c06e36f7717ad2aeb5649ae6d116b485ce5607a11450b26cb3646ad7eacc8459 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5j3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c06e36f7717ad2aeb5649ae6d116b485ce5607a11450b26cb3646ad7eacc8459 3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c06e36f7717ad2aeb5649ae6d116b485ce5607a11450b26cb3646ad7eacc8459 3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c06e36f7717ad2aeb5649ae6d116b485ce5607a11450b26cb3646ad7eacc8459 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5j3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5j3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5j3 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b479338058179bb355fad5bfffa05ec083d45730fe9342f 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TJO 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b479338058179bb355fad5bfffa05ec083d45730fe9342f 0 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b479338058179bb355fad5bfffa05ec083d45730fe9342f 0 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b479338058179bb355fad5bfffa05ec083d45730fe9342f 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TJO 00:25:50.601 13:00:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TJO 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.TJO 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:50.601 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=face86a64210aec1211451f0812b95cd39c9ea27d605042b 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MAD 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key face86a64210aec1211451f0812b95cd39c9ea27d605042b 2 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 face86a64210aec1211451f0812b95cd39c9ea27d605042b 2 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=face86a64210aec1211451f0812b95cd39c9ea27d605042b 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MAD 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MAD 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MAD 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa3e35bc83907985f1effad5b6204e5c 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lyN 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa3e35bc83907985f1effad5b6204e5c 1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa3e35bc83907985f1effad5b6204e5c 1 
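[editor's note] Each gen_dhchap_key call above follows the same recipe: draw len/2 random bytes with xxd, keep the resulting hex string as the secret, and wrap it into the NVMe DH-HMAC-CHAP representation DHHC-1:<hash>:<base64>:, where the hash byte is 00 (none), 01 (SHA-256), 02 (SHA-384) or 03 (SHA-512). The 'python -' body is not shown in the trace; per the spec's secret format the base64 payload should be the secret bytes followed by their little-endian CRC-32, so the helper below is a hedged reconstruction on that assumption, not the verbatim script:

gen_dhchap_key() {   # hedged sketch: gen_dhchap_key <digest 0-3> <hex length>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key.XXX")
    python3 - "$key" "$digest" > "$file" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                # the ASCII hex string is the secret
crc = struct.pack('<I', zlib.crc32(secret))  # assumed CRC-32 trailer per spec
print('DHHC-1:%02x:%s:' % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

Consistent with this reading, the keys that appear later (e.g. DHHC-1:00:ZWIyYTc2...) visibly base64-encode the 32 hex characters generated here, plus a four-byte tail.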
00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aa3e35bc83907985f1effad5b6204e5c 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lyN 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lyN 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.lyN 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87a12603e17629d170329341d2e4daff 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cLc 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87a12603e17629d170329341d2e4daff 1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87a12603e17629d170329341d2e4daff 1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87a12603e17629d170329341d2e4daff 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cLc 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cLc 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.cLc 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=45eb09a8a2a2a9b0f1cfdbf074016d12c0d7fc00d0c9286a 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bOX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45eb09a8a2a2a9b0f1cfdbf074016d12c0d7fc00d0c9286a 2 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45eb09a8a2a2a9b0f1cfdbf074016d12c0d7fc00d0c9286a 2 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45eb09a8a2a2a9b0f1cfdbf074016d12c0d7fc00d0c9286a 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bOX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bOX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bOX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e750e38e5654af56c99f78661ce6934 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vkb 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e750e38e5654af56c99f78661ce6934 0 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e750e38e5654af56c99f78661ce6934 0 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e750e38e5654af56c99f78661ce6934 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:50.859 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vkb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vkb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vkb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0533afb42fc18a286806d06bcd7c18ba56b855f7a8650e5abcb10964cc739639 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tIb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0533afb42fc18a286806d06bcd7c18ba56b855f7a8650e5abcb10964cc739639 3 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0533afb42fc18a286806d06bcd7c18ba56b855f7a8650e5abcb10964cc739639 3 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0533afb42fc18a286806d06bcd7c18ba56b855f7a8650e5abcb10964cc739639 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tIb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tIb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tIb 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1847105 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1847105 ']' 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
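[editor's note] With all five transport keys (keys[0..4]) and four controller keys (ckeys[0..3]; ckeys[4] is left empty) generated, waitforlisten blocks until the freshly started target answers on /var/tmp/spdk.sock. The helper itself is never traced here, so the following is only a plausible shape, assuming the stock rpc.py client; the real implementation lives in autotest_common.sh:

waitforlisten_sketch() {   # plausible shape only, not the actual helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                       # target died
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                                         # timed out
}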
00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.117 13:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.U3H 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5j3 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5j3 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.TJO 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MAD ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MAD 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.lyN 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.cLc ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cLc 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bOX 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vkb ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vkb 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tIb 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
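[editor's note] The loop that just completed registers every generated key file with the running target's keyring, so later RPCs can refer to them by name (key0..key4, plus ckey0..ckey3 where a controller key exists; ckeys[4] is empty, so no ckey4 is added). Stripped of the rpc_cmd wrapper, it reduces to the following, with the rpc.py path assumed:

for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done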
00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:51.375 13:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:53.926 Waiting for block devices as requested 00:25:53.926 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:54.205 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:54.205 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:54.205 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:54.205 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:54.464 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:54.464 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:54.464 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:54.464 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:54.723 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:54.723 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:54.723 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:54.723 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:54.982 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:54.982 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:54.982 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:55.241 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:55.807 No valid GPT data, bailing 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:55.807 00:25:55.807 Discovery Log Number of Records 2, Generation counter 2 00:25:55.807 =====Discovery Log Entry 0====== 00:25:55.807 trtype: tcp 00:25:55.807 adrfam: ipv4 00:25:55.807 subtype: current discovery subsystem 00:25:55.807 treq: not specified, sq flow control disable supported 00:25:55.807 portid: 1 00:25:55.807 trsvcid: 4420 00:25:55.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:55.807 traddr: 10.0.0.1 00:25:55.807 eflags: none 00:25:55.807 sectype: none 00:25:55.807 =====Discovery Log Entry 1====== 00:25:55.807 trtype: tcp 00:25:55.807 adrfam: ipv4 00:25:55.807 subtype: nvme subsystem 00:25:55.807 treq: not specified, sq flow control disable supported 00:25:55.807 portid: 1 00:25:55.807 trsvcid: 4420 00:25:55.807 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:55.807 traddr: 10.0.0.1 00:25:55.807 eflags: none 00:25:55.807 sectype: none 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 
]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.807 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.066 nvme0n1 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.066 13:00:26 
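[editor's note] Behind the attach above, configure_kernel_target had already exposed /dev/nvme0n1 through the kernel nvmet/TCP stack. The trace shows only bare echoes because output redirections are not traced, so the attribute paths below are inferred from the kernel's nvmet configfs layout and should be read as a reconstruction:

# Inferred reconstruction of the configfs writes traced at nvmf/common.sh@658-677.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # target inferred
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

host/auth.sh then narrows access for the auth test: it creates hosts/nqn.2024-02.io.spdk:host0, flips allow-any-host back off (the bare 'echo 0' at auth.sh@37, on this reading), and links the host under the subsystem's allowed_hosts, which is why the discovery log above lists the subsystem for this host.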
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.066 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.067 
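[editor's note] Each nvmet_auth_set_key round (here sha256/ffdhe2048 with keyid 0) programs what the kernel target will demand from this host. Again only the echoes are traced; mapping them onto the kernel's per-host dhchap configfs attributes gives, hedged, with key values truncated here:

# Hedged mapping of nvmet_auth_set_key onto nvmet's per-host configfs files.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"            # digest to negotiate
echo ffdhe2048      > "$host/dhchap_dhgroup"         # DH group to negotiate
echo "DHHC-1:00:ZWIy...:" > "$host/dhchap_key"       # transport key (keyid 0)
echo "DHHC-1:03:YzA2...:" > "$host/dhchap_ctrl_key"  # controller key, if set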
13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.067 13:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.326 nvme0n1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.326 13:00:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.326 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.585 nvme0n1 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
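[editor's note] On the initiator side, connect_authenticate is the mirror image: restrict bdev_nvme to the digest/dhgroup pair under test, attach with the named keys, and confirm the controller came up before detaching and moving to the next combination. The commands below are taken from the traced key0/key1 rounds, with keyid 2 substituted to match the round in progress and the rpc.py path assumed in place of the rpc_cmd wrapper:

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0              # next combination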
00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.585 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.586 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.845 nvme0n1 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:56.845 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:25:56.846 13:00:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.846 nvme0n1 00:25:56.846 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.105 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.105 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 13:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 nvme0n1 00:25:57.106 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.106 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.106 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.106 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.106 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.106 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.365 nvme0n1 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.365 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.624 nvme0n1 00:25:57.624 
13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.624 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 nvme0n1 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.883 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.142 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.143 13:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.143 nvme0n1 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.143 
13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.143 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.401 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.402 13:00:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.402 nvme0n1 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:25:58.402 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:58.661 13:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.661 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.921 nvme0n1 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.921 13:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.921 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.179 nvme0n1 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.179 13:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.179 13:00:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.179 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.438 nvme0n1 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.438 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.697 nvme0n1 00:25:59.697 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.697 13:00:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.697 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.697 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.697 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.697 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.956 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.216 nvme0n1 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:00.216 13:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:00.216 13:00:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.216 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.478 nvme0n1 00:26:00.478 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.478 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.478 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.478 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.478 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.478 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.737 
13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.737 13:00:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.737 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.999 nvme0n1 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.999 13:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.568 nvme0n1 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.568 
13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.568 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.827 nvme0n1 00:26:01.827 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.827 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.827 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.827 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.827 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.086 13:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.345 nvme0n1 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.345 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.346 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.281 nvme0n1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.281 13:00:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.281 13:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.849 nvme0n1 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.849 13:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 nvme0n1 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.448 
13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
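The get_main_ns_ip fragment that repeats before every attach (local ip, the ip_candidates map, the -z guards, echo 10.0.0.1) is the nvmf/common.sh helper that resolves which address to dial for the active transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp. Reconstructed from its expanded trace it is roughly the following; the guard expressions are paraphrased, since the xtrace only shows them after variable expansion.

    # rough reconstruction of get_main_ns_ip from its expanded xtrace
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # bail out if the transport is unset or has no candidate variable
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable holding the IP
        [[ -z ${!ip} ]] && return 1           # expands to 10.0.0.1 in this run
        echo "${!ip}"
    }

In this tcp run every lookup lands on NVMF_INITIATOR_IP and prints 10.0.0.1, which is the -a address seen on each bdev_nvme_attach_controller call.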
00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.448 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.020 nvme0n1 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.020 
13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.020 13:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.954 nvme0n1 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.954 nvme0n1 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.954 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
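[Editorial aside, not part of the original console log.] The stretch of trace above is one pass of the test driver in host/auth.sh: for each digest/dhgroup/keyid combination it programs the target key, restricts the SPDK host to that combination, attaches a controller over TCP with DH-HMAC-CHAP, verifies the controller exists, and detaches. The sketch below condenses what the xtrace lines show into plain Bash. The loop structure, RPC names, and flags are taken verbatim from the trace (auth.sh@100-@104 and @42-@65); the array contents, the rpc_cmd wrapper, and the use of get_main_ns_ip in a command substitution are assumptions standing in for parts of the script this excerpt does not show.

```bash
#!/usr/bin/env bash
# Reconstructed sketch of the host/auth.sh driver loop; not the verbatim script.

digests=(sha256 sha384)                  # assumed: digests visible in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192) # assumed: dhgroups visible in this excerpt
keys=()                                  # assumed: DHHC-1 host secrets, keys[0..4]
ckeys=()                                 # assumed: DHHC-1 ctrlr secrets; ckeys[4] empty

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Only pass a controller key when one is defined for this keyid (auth.sh@58).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to one digest/dhgroup pair (auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect and authenticate against the target (auth.sh@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The controller only exists if DH-HMAC-CHAP succeeded (auth.sh@64-@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for digest in "${digests[@]}"; do          # auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do    # auth.sh@101
        for keyid in "${!keys[@]}"; do     # auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (auth.sh@103)
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (auth.sh@104)
        done
    done
done
```

The repetitive blocks in the log that follow are further iterations of exactly this loop, differing only in the digest, dhgroup, and key material echoed at auth.sh@44-@51.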
00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.955 13:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.214 nvme0n1 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.214 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.473 nvme0n1 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.473 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.731 nvme0n1 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.731 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.732 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.991 nvme0n1 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.991 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.251 nvme0n1 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.251 13:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
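[Editorial aside, not part of the original console log.] To replay a single combination from this trace by hand, the iteration that begins above (sha384 with ffdhe3072) can be reduced to four RPC calls. In the autotest harness, rpc_cmd forwards to SPDK's scripts/rpc.py, so the roughly equivalent direct invocations are sketched below; all flags are copied verbatim from the trace. This assumes the key names key0/ckey0 were already registered earlier in the run, which is outside this excerpt.

```bash
# Hypothetical manual replay of one iteration (sha384 + ffdhe3072, keyid 0).
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0
```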
00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.251 nvme0n1 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.251 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.510 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.511 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.770 nvme0n1 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:07.770 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.771 nvme0n1 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.771 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 nvme0n1 00:26:08.031 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.290 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.290 13:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.290 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.290 13:00:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.290 13:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.290 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.549 nvme0n1 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.549 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.550 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.808 nvme0n1 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.808 13:00:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.808 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.068 nvme0n1 00:26:09.068 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.068 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.068 13:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.068 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.068 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.068 13:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.068 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.068 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.068 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.068 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:09.325 13:00:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.325 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.584 nvme0n1 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:09.584 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.844 nvme0n1 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.844 13:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.411 nvme0n1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.411 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.685 nvme0n1 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.685 13:00:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.685 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.252 nvme0n1 00:26:11.252 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.252 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.252 13:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.252 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.252 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.252 13:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.252 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.509 nvme0n1 00:26:11.509 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.509 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.509 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.509 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.509 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.509 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
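Condensed, each nvme0n1 block above is one round-trip of the suite's connect_authenticate helper. A minimal sketch reconstructed from the host/auth.sh xtrace lines in this log (rpc_cmd is the suite's JSON-RPC wrapper; the address, NQNs, and key names are the values echoed above, and key/ckey name pairs are assumed to be pre-registered exactly as shown):

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # The controller key is optional: keyid 4 above carries no ckey, so this array stays empty.
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # Restrict the host to a single digest/dhgroup combination for this pass.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Dial the target with the keyid under test.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
  # The attach only succeeds if DH-HMAC-CHAP completed, so finding the named
  # controller is the pass condition before tearing back down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}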
00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.766 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.767 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 nvme0n1 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.023 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
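And the sweep driving those round-trips, per the host/auth.sh@100-@102 loop markers in the trace. A sketch assuming the helpers above; the arrays are abbreviated to the values this excerpt actually exercises, so the script's real lists may be longer:

digests=(sha384 sha512)                             # digests seen in this excerpt
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)  # groups seen here; the script may sweep more
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                  # keyids 0-4, per the keys[] array
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side first
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # then dial in from the host
    done
  done
done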
00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.024 13:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.589 nvme0n1 00:26:12.589 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.589 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.589 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.589 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.589 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.589 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.847 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.848 13:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.413 nvme0n1 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.413 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.414 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 nvme0n1 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.978 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.979 13:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.542 nvme0n1 00:26:14.542 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.542 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:14.542 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.542 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.542 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.801 13:00:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.801 13:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.368 nvme0n1 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.368 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.369 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.627 nvme0n1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.627 13:00:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.627 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.885 nvme0n1 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.885 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.886 nvme0n1 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.886 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.144 13:00:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.144 13:00:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.144 13:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.144 nvme0n1 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.144 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.145 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.403 nvme0n1 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.403 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.404 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.663 nvme0n1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.663 
13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.663 13:00:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.663 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.922 nvme0n1 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
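
Condensed out of the xtrace noise, every (digest, dhgroup, keyid) combination in this trace exercises the same host-side sequence; a sketch of one loop body follows, where rpc_cmd is the autotest JSON-RPC wrapper used throughout this log and key1/ckey1 are key names assumed to have been registered earlier in the script:

# One iteration of the host-side auth loop, condensed from the trace above.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The attach only succeeds if DH-HMAC-CHAP completed, so the check is simply
# that the controller shows up, after which it is detached before the next
# combination is tried.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
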
00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.922 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.923 13:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.182 nvme0n1 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.182 13:00:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
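
The expansion running through this point is the get_main_ns_ip helper from nvmf/common.sh resolving which address the initiator should dial. A rough reconstruction consistent with the xtrace (not copied from the source file; the TEST_TRANSPORT variable name is an assumption, the trace only shows the literal tcp):

# Rough reconstruction of get_main_ns_ip as expanded in the trace: map the
# active transport to the *name* of the variable holding the address, bail
# out if the transport or the mapping is unset, then dereference the name.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1              # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}              # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                       # indirect expansion; 10.0.0.1 here
    echo "${!ip}"
}
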
00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.182 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.441 nvme0n1 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.441 
13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.441 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.442 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.701 nvme0n1 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.701 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.960 nvme0n1 00:26:17.960 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.960 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.960 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.960 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.960 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.960 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.254 13:00:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.254 13:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.544 nvme0n1 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
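The nvmet_auth_set_key calls traced at host/auth.sh@42-51 above reprovision the kernel nvmet target before each reconnect. The DHHC-1:<id>:<base64 secret>: strings are NVMe in-band authentication secrets (the middle field encodes the key length class, e.g. 01 for a 32-byte and 03 for a 64-byte key). The bare echo entries at auth.sh@48-51 are consistent with writes into the target's configfs host entry; a hedged sketch, assuming the standard Linux nvmet attribute names — the redirect targets are not visible in xtrace, and the key values are elided here:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed hostnqn path
echo 'hmac(sha512)'  > "$host/dhchap_hash"      # digest      (auth.sh@48)
echo ffdhe4096       > "$host/dhchap_dhgroup"   # DH group    (auth.sh@49)
echo "DHHC-1:01:..." > "$host/dhchap_key"       # host secret (auth.sh@50, value elided)
echo "DHHC-1:01:..." > "$host/dhchap_ctrl_key"  # ctrlr secret (auth.sh@51, only when a ckey is set)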
00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:18.544 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.545 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.803 nvme0n1 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.803 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.804 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.804 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.804 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.063 nvme0n1 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.063 13:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.322 nvme0n1 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
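The get_main_ns_ip helper traced at nvmf/common.sh@741-755 just above picks the address the host dials. Condensed from the trace — variable names as traced, error branches simplified:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP            # TEST_TRANSPORT=tcp in this run
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}              # holds the *name* of an env var
    [[ -z ${!ip} ]] && return 1                       # indirect expansion; 10.0.0.1 here
    echo "${!ip}"
}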
00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.322 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.910 nvme0n1 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
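Every iteration that follows repeats the same initiator-side RPC sequence, varying only the digest, DH group, and key index. A reconstruction condensed from the host/auth.sh@55-65 entries in this log, not the verbatim function body — rpc_cmd wraps scripts/rpc.py, and key0..key4/ckey0..ckey3 name secrets registered earlier in the test (not shown here):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # as traced at auth.sh@58
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded iff the controller came up under the expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}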
00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.910 13:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.169 nvme0n1 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.169 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.428 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.429 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.688 nvme0n1 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.688 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.256 nvme0n1 00:26:21.256 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.256 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.256 13:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.256 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.256 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.256 13:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.256 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.257 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.516 nvme0n1 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.516 13:00:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.516 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyYTc2YjMxNzM1N2M3YmQ0NTZhOGZiN2IyNGQxNzIqcqks: 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: ]] 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA2ZTM2Zjc3MTdhZDJhZWI1NjQ5YWU2ZDExNmI0ODVjZTU2MDdhMTE0NTBiMjZjYjM2NDZhZDdlYWNjODQ1OZg8abA=: 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.775 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.776 13:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 nvme0n1 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.344 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.345 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.345 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.345 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.915 nvme0n1 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.915 13:00:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWEzZTM1YmM4MzkwNzk4NWYxZWZmYWQ1YjYyMDRlNWOZQKjL: 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdhMTI2MDNlMTc2MjlkMTcwMzI5MzQxZDJlNGRhZmYIKL0w: 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.915 13:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 nvme0n1 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDVlYjA5YThhMmEyYTliMGYxY2ZkYmYwNzQwMTZkMTJjMGQ3ZmMwMGQwYzkyODZhpFBvIQ==: 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: ]] 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU3NTBlMzhlNTY1NGFmNTZjOTlmNzg2NjFjZTY5MzTfTWxH: 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:23.484 13:00:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.484 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.743 13:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.311 nvme0n1 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDUzM2FmYjQyZmMxOGEyODY4MDZkMDZiY2Q3YzE4YmE1NmI4NTVmN2E4NjUwZTVhYmNiMTA5NjRjYzczOTYzORWTd5c=: 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.311 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:24.312 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 nvme0n1 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI0NzkzMzgwNTgxNzliYjM1NWZhZDViZmZmYTA1ZWMwODNkNDU3MzBmZTkzNDJmhxJzkQ==: 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmFjZTg2YTY0MjEwYWVjMTIxMTQ1MWYwODEyYjk1Y2QzOWM5ZWEyN2Q2MDUwNDJi5FKZpw==: 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.880 
13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 request: 00:26:24.880 { 00:26:24.880 "name": "nvme0", 00:26:24.880 "trtype": "tcp", 00:26:24.880 "traddr": "10.0.0.1", 00:26:24.880 "adrfam": "ipv4", 00:26:24.880 "trsvcid": "4420", 00:26:24.880 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:24.880 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:24.880 "prchk_reftag": false, 00:26:24.880 "prchk_guard": false, 00:26:24.880 "hdgst": false, 00:26:24.880 "ddgst": false, 00:26:24.880 "method": "bdev_nvme_attach_controller", 00:26:24.880 "req_id": 1 00:26:24.880 } 00:26:24.880 Got JSON-RPC error response 00:26:24.880 response: 00:26:24.880 { 00:26:24.880 "code": -5, 00:26:24.880 "message": "Input/output error" 00:26:24.880 } 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.880 13:00:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.140 request: 00:26:25.140 { 00:26:25.140 "name": "nvme0", 00:26:25.140 "trtype": "tcp", 00:26:25.140 "traddr": "10.0.0.1", 00:26:25.140 "adrfam": "ipv4", 00:26:25.140 "trsvcid": "4420", 00:26:25.140 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:25.140 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:25.140 "prchk_reftag": false, 00:26:25.140 "prchk_guard": false, 00:26:25.140 "hdgst": false, 00:26:25.140 "ddgst": false, 00:26:25.140 "dhchap_key": "key2", 00:26:25.140 "method": "bdev_nvme_attach_controller", 00:26:25.140 "req_id": 1 00:26:25.140 } 00:26:25.140 Got JSON-RPC error response 00:26:25.140 response: 00:26:25.140 { 00:26:25.140 "code": -5, 00:26:25.140 "message": "Input/output error" 00:26:25.140 } 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:25.140 13:00:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.140 13:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.141 13:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.141 request: 00:26:25.141 { 00:26:25.141 "name": "nvme0", 00:26:25.141 "trtype": "tcp", 00:26:25.141 "traddr": "10.0.0.1", 00:26:25.141 "adrfam": "ipv4", 
00:26:25.141 "trsvcid": "4420", 00:26:25.141 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:25.141 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:25.141 "prchk_reftag": false, 00:26:25.141 "prchk_guard": false, 00:26:25.141 "hdgst": false, 00:26:25.141 "ddgst": false, 00:26:25.141 "dhchap_key": "key1", 00:26:25.141 "dhchap_ctrlr_key": "ckey2", 00:26:25.141 "method": "bdev_nvme_attach_controller", 00:26:25.141 "req_id": 1 00:26:25.141 } 00:26:25.141 Got JSON-RPC error response 00:26:25.141 response: 00:26:25.141 { 00:26:25.141 "code": -5, 00:26:25.141 "message": "Input/output error" 00:26:25.141 } 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:25.141 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:25.141 rmmod nvme_tcp 00:26:25.141 rmmod nvme_fabrics 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1847105 ']' 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1847105 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1847105 ']' 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1847105 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1847105 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1847105' 00:26:25.400 killing process with pid 1847105 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1847105 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1847105 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.400 13:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:27.936 13:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:30.474 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:30.474 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:31.409 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:31.409 13:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.U3H /tmp/spdk.key-null.TJO /tmp/spdk.key-sha256.lyN /tmp/spdk.key-sha384.bOX /tmp/spdk.key-sha512.tIb 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:31.409 13:01:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:34.696 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:34.696 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:34.696 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:34.696 00:26:34.696 real 0m50.435s 00:26:34.696 user 0m45.102s 00:26:34.696 sys 0m12.304s 00:26:34.696 13:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.696 13:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.696 ************************************ 00:26:34.696 END TEST nvmf_auth_host 00:26:34.696 ************************************ 00:26:34.696 13:01:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:34.696 13:01:05 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:34.696 13:01:05 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:34.696 13:01:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:34.696 13:01:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.696 13:01:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.696 ************************************ 00:26:34.696 START TEST nvmf_digest 00:26:34.696 ************************************ 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:34.696 * Looking for test storage... 
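Before the digest suite takes over, it is worth pinning down what nvmf_auth_host just verified. Each iteration of the trace above boils down to a short rpc.py session against the initiator app; a minimal replay of the sha512/ffdhe8192 positive path looks like the sketch below, assuming the DHHC-1 secrets are already registered with the app under the names key2/ckey2 (the test's setup phase, not shown here, takes care of that):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Pin the initiator to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Attach with a host key plus a controller key for bidirectional auth.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0

The negative cases above exercise the other side of the contract: attaching with no key at all, with key2 alone, or with the mismatched key1/ckey2 pair is refused by the kernel nvmet target, and bdev_nvme_attach_controller surfaces the refusal as JSON-RPC code -5 (Input/output error), which the NOT wrapper in autotest_common.sh turns into a passing assertion.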
00:26:34.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:34.696 13:01:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.696 13:01:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.014 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:40.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:40.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:40.015 Found net devices under 0000:86:00.0: cvl_0_0 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:40.015 Found net devices under 0000:86:00.1: cvl_0_1 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:40.015 00:26:40.015 --- 10.0.0.2 ping statistics --- 00:26:40.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.015 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:26:40.015 00:26:40.015 --- 10.0.0.1 ping statistics --- 00:26:40.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.015 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.015 13:01:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.275 13:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:40.275 13:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:40.275 13:01:10 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:40.275 13:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:40.275 13:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.275 13:01:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.275 ************************************ 00:26:40.275 START TEST nvmf_digest_clean 00:26:40.275 ************************************ 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1860334 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1860334 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1860334 ']' 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.275 
13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.275 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.275 [2024-07-15 13:01:11.069398] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:40.275 [2024-07-15 13:01:11.069446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.275 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.275 [2024-07-15 13:01:11.142715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.275 [2024-07-15 13:01:11.221605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.275 [2024-07-15 13:01:11.221639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.275 [2024-07-15 13:01:11.221646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.275 [2024-07-15 13:01:11.221652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.275 [2024-07-15 13:01:11.221656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
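The target side of this suite is the nvmf_tgt launched above with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, so it idles until common_target_config (host/digest.sh@43) drives it over /var/tmp/spdk.sock. That rpc_cmd batch is not expanded in the trace; judging from the null0 bdev and the 10.0.0.2:4420 listener it announces just below, a hand-rolled equivalent would be roughly the following (the framework_start_init step and the null-bdev geometry are assumptions, not read from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc framework_start_init                      # release the --wait-for-rpc hold
    $rpc bdev_null_create null0 100 4096           # 100 MB, 4K blocks: assumed sizes
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420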
00:26:40.275 [2024-07-15 13:01:11.221671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.212 13:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.212 null0 00:26:41.212 [2024-07-15 13:01:11.996058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.212 [2024-07-15 13:01:12.020220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.212 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.212 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:41.212 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:41.212 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:41.212 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1860517 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1860517 /var/tmp/bperf.sock 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1860517 ']' 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
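On the initiator side the load generator is bdevperf, started above as a second SPDK app with its own RPC socket so the target's /var/tmp/spdk.sock stays free. The --ddgst attach that follows enables the NVMe/TCP data digest, a CRC32C computed over each data PDU, which is the feature this digest suite actually measures. Decoding the command line above (my reading of the flags; bdevperf --help is the authority), followed by the three RPC steps condensed from digest.sh@87-92 that the next lines execute:

    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    #     -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
    #   -m 2          core mask 0x2, i.e. pin the reactor to core 1
    #   -r PATH       serve RPCs on /var/tmp/bperf.sock instead of the default socket
    #   -w randread   random reads; -o 4096: 4 KiB I/O; -q 128: queue depth
    #   -t 2          seconds per run
    #   -z            hold I/O until a perform_tests RPC arrives
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests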
00:26:41.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.213 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 [2024-07-15 13:01:12.069175] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:41.213 [2024-07-15 13:01:12.069216] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860517 ] 00:26:41.213 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.213 [2024-07-15 13:01:12.135904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.471 [2024-07-15 13:01:12.214976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.040 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.040 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:42.040 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:42.040 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:42.040 13:01:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:42.299 13:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.299 13:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.557 nvme0n1 00:26:42.557 13:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:42.557 13:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.815 Running I/O for 2 seconds... 
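Condensed, the bperf launch traced at 13:01:12 above performs four steps; a sketch using the binaries and flags from the log (relative paths from the SPDK repo root are an editorial shorthand for the absolute /var/jenkins/... paths):

    # 1) Start bdevperf idle (-z: wait for the perform_tests RPC) and
    #    paused (--wait-for-rpc: defer init until framework_start_init).
    BPERF_SOCK=/var/tmp/bperf.sock
    build/examples/bdevperf -m 2 -r "$BPERF_SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    # 2) Finish initialization once the socket is up.
    scripts/rpc.py -s "$BPERF_SOCK" framework_start_init
    # 3) Attach the target with TCP data digest enabled (--ddgst).
    scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4) Drive the timed workload over the same socket.
    examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests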
00:26:44.745
00:26:44.745 Latency(us)
00:26:44.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.745 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:44.745 nvme0n1 : 2.00 25487.40 99.56 0.00 0.00 5017.36 2222.53 11511.54
00:26:44.745 ===================================================================================================================
00:26:44.745 Total : 25487.40 99.56 0.00 0.00 5017.36 2222.53 11511.54
00:26:44.745 0
00:26:44.745 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:44.745 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:44.745 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:44.745 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:44.745 | select(.opcode=="crc32c")
00:26:44.745 | "\(.module_name) \(.executed)"'
00:26:44.745 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1860517
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1860517 ']'
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1860517
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1860517
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1860517'
00:26:45.004 killing process with pid 1860517
00:26:45.004 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1860517
00:26:45.004 Received shutdown signal, test time was about 2.000000 seconds
00:26:45.004
00:26:45.004 Latency(us)
00:26:45.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:45.004 ===================================================================================================================
00:26:45.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:45.005 13:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1860517
00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:26:45.264 13:01:16
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1861214 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1861214 /var/tmp/bperf.sock 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1861214 ']' 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.264 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:45.264 [2024-07-15 13:01:16.053805] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:45.265 [2024-07-15 13:01:16.053854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861214 ] 00:26:45.265 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:45.265 Zero copy mechanism will not be used. 
00:26:45.265 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.265 [2024-07-15 13:01:16.122896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.265 [2024-07-15 13:01:16.190607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.202 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.203 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:46.203 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:46.203 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.203 13:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:46.203 13:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.203 13:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.771 nvme0n1 00:26:46.771 13:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:46.771 13:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.771 Zero copy mechanism will not be used. 00:26:46.771 Running I/O for 2 seconds... 
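After each timed run the pass/fail decision comes from accel statistics rather than throughput: with scan_dsa=false the expected crc32c module is "software". The check traced at 13:01:15 above, and repeated after this run, reduces to roughly the following sketch (run from the SPDK repo root):

    # Ask bdevperf which accel module executed crc32c and how many times;
    # with DSA disabled the digest work must land on the software module.
    exp_module=software
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c")
                    | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) || exit 1               # digests were computed at all
    [[ $acc_module == "$exp_module" ]] || exit 1   # ...and by the expected module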
00:26:48.676
00:26:48.676 Latency(us)
00:26:48.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:48.676 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:48.676 nvme0n1 : 2.00 5018.21 627.28 0.00 0.00 3185.93 690.98 4957.94
00:26:48.676 ===================================================================================================================
00:26:48.676 Total : 5018.21 627.28 0.00 0.00 3185.93 690.98 4957.94
00:26:48.676 0
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:48.935 | select(.opcode=="crc32c")
00:26:48.935 | "\(.module_name) \(.executed)"'
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1861214
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1861214 ']'
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1861214
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861214
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861214'
00:26:48.935 killing process with pid 1861214
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1861214
00:26:48.935 Received shutdown signal, test time was about 2.000000 seconds
00:26:48.935
00:26:48.935 Latency(us)
00:26:48.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:48.935 ===================================================================================================================
00:26:48.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:48.935 13:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1861214
00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:26:49.194 13:01:20
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1861907 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1861907 /var/tmp/bperf.sock 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1861907 ']' 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:49.194 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.194 [2024-07-15 13:01:20.103509] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:26:49.194 [2024-07-15 13:01:20.103562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861907 ] 00:26:49.194 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.453 [2024-07-15 13:01:20.171182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.453 [2024-07-15 13:01:20.243282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.021 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.021 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:50.021 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:50.021 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:50.021 13:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.279 13:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.280 13:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.539 nvme0n1 00:26:50.539 13:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.539 13:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.797 Running I/O for 2 seconds... 
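Teardown after every run goes through the killprocess helper traced at 13:01:15 and 13:01:19 above (and again after this run). A simplified sketch of its safety checks; the real helper in autotest_common.sh also handles processes started under sudo:

    # Confirm the pid is alive and is not sudo before signalling, then reap it.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                                  # still alive?
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1  # never signal sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                 # reap, collect rc
    }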
00:26:52.701
00:26:52.701 Latency(us)
00:26:52.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.701 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:52.701 nvme0n1 : 2.00 28279.25 110.47 0.00 0.00 4520.38 1809.36 11454.55
00:26:52.701 ===================================================================================================================
00:26:52.701 Total : 28279.25 110.47 0.00 0.00 4520.38 1809.36 11454.55
00:26:52.701 0
00:26:52.701 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:52.701 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:52.701 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:52.701 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:52.701 | select(.opcode=="crc32c")
00:26:52.701 | "\(.module_name) \(.executed)"'
00:26:52.701 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1861907
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1861907 ']'
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1861907
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861907
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861907'
00:26:52.960 killing process with pid 1861907
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1861907
00:26:52.960 Received shutdown signal, test time was about 2.000000 seconds
00:26:52.960
00:26:52.960 Latency(us)
00:26:52.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.960 ===================================================================================================================
00:26:52.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:52.960 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1861907
00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:26:53.219 13:01:23
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1862476 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1862476 /var/tmp/bperf.sock 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1862476 ']' 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:53.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.219 13:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:53.219 [2024-07-15 13:01:23.976905] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:53.219 [2024-07-15 13:01:23.976952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862476 ] 00:26:53.219 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:53.219 Zero copy mechanism will not be used. 
00:26:53.219 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.219 [2024-07-15 13:01:24.044669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.219 [2024-07-15 13:01:24.115156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.155 13:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.155 13:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:54.155 13:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:54.155 13:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:54.155 13:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:54.155 13:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.155 13:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.721 nvme0n1 00:26:54.721 13:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:54.721 13:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:54.721 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:54.721 Zero copy mechanism will not be used. 00:26:54.721 Running I/O for 2 seconds... 
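This launch is the last of the four clean-digest runs; they differ only in the run_bperf arguments (I/O pattern, I/O size, queue depth), with scan_dsa always false. For orientation, the whole sweep is equivalent to the loop below (run_bperf as defined in host/digest.sh):

    # The four nvmf_digest_clean workloads, in log order.
    # run_bperf args: rw, io size in bytes, queue depth, scan_dsa.
    for cfg in 'randread 4096 128' 'randread 131072 16' \
               'randwrite 4096 128' 'randwrite 131072 16'; do
        run_bperf $cfg false    # unquoted on purpose: splits into rw/bs/qd
    done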
00:26:56.641
00:26:56.641 Latency(us)
00:26:56.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:56.641 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:56.641 nvme0n1 : 2.00 5594.81 699.35 0.00 0.00 2855.77 1837.86 12765.27
00:26:56.641 ===================================================================================================================
00:26:56.641 Total : 5594.81 699.35 0.00 0.00 2855.77 1837.86 12765.27
00:26:56.641 0
00:26:56.641 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:56.641 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:56.641 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:56.641 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:56.641 | select(.opcode=="crc32c")
00:26:56.641 | "\(.module_name) \(.executed)"'
00:26:56.641 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1862476
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1862476 ']'
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1862476
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1862476
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1862476'
00:26:56.899 killing process with pid 1862476
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1862476
00:26:56.899 Received shutdown signal, test time was about 2.000000 seconds
00:26:56.899
00:26:56.899 Latency(us)
00:26:56.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:56.899 ===================================================================================================================
00:26:56.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:56.899 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1862476
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1860334
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1860334 ']'
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1860334
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1860334
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1860334'
00:26:57.170 killing process with pid 1860334
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1860334
00:26:57.170 13:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1860334
00:26:57.431
00:26:57.431 real 0m17.145s
00:26:57.431 user 0m33.030s
00:26:57.431 sys 0m4.388s
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:57.431 ************************************
00:26:57.431 END TEST nvmf_digest_clean
00:26:57.431 ************************************
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:57.431 ************************************
00:26:57.431 START TEST nvmf_digest_error
00:26:57.431 ************************************
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1863197
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1863197
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1863197 ']'
00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local
rpc_addr=/var/tmp/spdk.sock 00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.431 13:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.431 [2024-07-15 13:01:28.287422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:57.431 [2024-07-15 13:01:28.287463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.431 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.431 [2024-07-15 13:01:28.358162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.689 [2024-07-15 13:01:28.436115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.689 [2024-07-15 13:01:28.436150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.689 [2024-07-15 13:01:28.436157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.689 [2024-07-15 13:01:28.436163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.689 [2024-07-15 13:01:28.436168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
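The nvmf_digest_error setup that follows (13:01:29 onward) differs from the clean test in three RPC calls: the target routes the crc32c opcode through the error-injecting accel module, bdevperf enables NVMe error counters with unlimited bdev-layer retries, and the injector is armed to corrupt digests. A condensed sketch in log order (socket and netns plumbing omitted; target-side calls actually go through rpc_cmd in the harness):

    # Target: route crc32c through the "error" accel module.
    scripts/rpc.py accel_assign_opc -o crc32c -m error
    # bperf: count NVMe errors and retry forever at the bdev layer.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    # Keep injection disarmed while the controller attaches...
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then corrupt the next 256 crc32c results, producing the data digest
    # errors and transient-transport-error retries seen in the trace below.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256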
00:26:57.689 [2024-07-15 13:01:28.436200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.256 [2024-07-15 13:01:29.122209] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.256 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.256 null0 00:26:58.515 [2024-07-15 13:01:29.210452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.515 [2024-07-15 13:01:29.234620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1863360 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1863360 /var/tmp/bperf.sock 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1863360 ']' 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.515 13:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:58.515 [2024-07-15 13:01:29.284160] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:58.515 [2024-07-15 13:01:29.284200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863360 ] 00:26:58.515 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.515 [2024-07-15 13:01:29.351868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.515 [2024-07-15 13:01:29.430869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.451 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.711 nvme0n1 00:26:59.711 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:59.711 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.711 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:59.711 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.711 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:59.711 13:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.711 Running I/O for 2 seconds... 00:26:59.971 [2024-07-15 13:01:30.667521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.667558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.667568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.678390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.678415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.678424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.688016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.688037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.688046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.696422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.696442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.696450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.707078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.707098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.707107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.717085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.717104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.717112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.725521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.725541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20182 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.725549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.735866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.735885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.735893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.744502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.744521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.744529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.755568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.755588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.755596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.766519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.971 [2024-07-15 13:01:30.766539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.971 [2024-07-15 13:01:30.766547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.971 [2024-07-15 13:01:30.775024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.775044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.775052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.785653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.785674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.785682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.795148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.795168] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.795176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.804358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.804378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.804386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.814361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.814380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.814388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.822746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.822765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.822773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.833946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.833965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.833976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.843879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.843899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.843906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.853281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 13:01:30.853300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.972 [2024-07-15 13:01:30.853308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.972 [2024-07-15 13:01:30.862104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:26:59.972 [2024-07-15 
13:01:30.862124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.862131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.972 [2024-07-15 13:01:30.872256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:26:59.972 [2024-07-15 13:01:30.872276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.872284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.972 [2024-07-15 13:01:30.881006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:26:59.972 [2024-07-15 13:01:30.881026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.881033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.972 [2024-07-15 13:01:30.890752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:26:59.972 [2024-07-15 13:01:30.890772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.890780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.972 [2024-07-15 13:01:30.901090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:26:59.972 [2024-07-15 13:01:30.901109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.901117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.972 [2024-07-15 13:01:30.912766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:26:59.972 [2024-07-15 13:01:30.912786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.972 [2024-07-15 13:01:30.921370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:26:59.972 [2024-07-15 13:01:30.921390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.972 [2024-07-15 13:01:30.921401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.932170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.932191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.932199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.942088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.942108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.942116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.950676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.950696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.950705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.960376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.960397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.960405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.970166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.970187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.980981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.981002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.981010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:30.991501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:30.991521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:30.991529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.000780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.000800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.000812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.009698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.009718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.009726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.019747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.019767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.019775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.030854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.030874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.030882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.039400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.039420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.039428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.051250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.051278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.061297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.061315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.061323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.070300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.070320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.070327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.081117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.081137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.081146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.089977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.090003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.090011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.099545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.099565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.099573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.110040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.110059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.110067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.119105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.119125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.119132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.128807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.128827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.128835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.138187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.138207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.138215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.148582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.148602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.148610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.158273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.158294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.158301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.167258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.167278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.167285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.232 [2024-07-15 13:01:31.177548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.232 [2024-07-15 13:01:31.177568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.232 [2024-07-15 13:01:31.177576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.186510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.186531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.186539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.197421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.197441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.197449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.208655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.208675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.208683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.218463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.218482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.218489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.227076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.227096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.227104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.237108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.237127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.237135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.248523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.248543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.248551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.257190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.257209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.257221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.266428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.266448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.266455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.277110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.277130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.277139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.286893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.286913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.286921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.296998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.297018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.297026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.305364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.305384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.305392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.315584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.492 [2024-07-15 13:01:31.315605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.492 [2024-07-15 13:01:31.315612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.492 [2024-07-15 13:01:31.325237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.325258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.325266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.334109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.334129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.334137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.344157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.344177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.344185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.354121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.354141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.354149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.362090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.362109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.362117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.371573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.371592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.371600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.382035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.382055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.382063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.392184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.392202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.392210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.401957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.401976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.401984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.410469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.410489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.410496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.421591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.421613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.421625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.430603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.430623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.430631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.493 [2024-07-15 13:01:31.442042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.493 [2024-07-15 13:01:31.442062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.493 [2024-07-15 13:01:31.442070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.453505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.453526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.453534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.462283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.462302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.462310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.473446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.473465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.473473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.482631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.482651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.482658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.492745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.492764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.492772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.502643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.502662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.502670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.511259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.511281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.511288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.521969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.521989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.521996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.531766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.531786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.531794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.541381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.541399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.541407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.550293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.550312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.550319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.558834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.558854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.558861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.752 [2024-07-15 13:01:31.568245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.752 [2024-07-15 13:01:31.568264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.752 [2024-07-15 13:01:31.568272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.578428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.578447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.578455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.588550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.588569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.588577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.597023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.597042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.597050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.607832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.607851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.607859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.615940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.615960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.615967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.626529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.626549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.626557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.636824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.636843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.636851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.646029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.646048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.646056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.656611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.656630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.656638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.667071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.667089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.667097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.675082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.675103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.675115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.685548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.685569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.685579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.696190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.696210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.696217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:00.753 [2024-07-15 13:01:31.704826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:00.753 [2024-07-15 13:01:31.704846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.753 [2024-07-15 13:01:31.704854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.716488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.716510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.716518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.726514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.726534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.726542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.734947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.734966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.734974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.746625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.746645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.746652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.757335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.757354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.757362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.766038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.766062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.766070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.777236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.777255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.777263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.785764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.785783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.785791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.797092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.797111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.797119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.807346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.807365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.807374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.815999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.816018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.816026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.826422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.826442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.826450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.836180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.836199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.836207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.845076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.845095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.845106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.854571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.854591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.854598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.865130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.865149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.865157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.873272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.873292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.012 [2024-07-15 13:01:31.873299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.012 [2024-07-15 13:01:31.882983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.012 [2024-07-15 13:01:31.883002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.883010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.892400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.892420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.892428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.902265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.902284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.912420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.912440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.912447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.922396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.922415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.922423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.930958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.930980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.930988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.940728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.940747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.940755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.950316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.950336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.950344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.013 [2024-07-15 13:01:31.961342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.013 [2024-07-15 13:01:31.961362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.013 [2024-07-15 13:01:31.961369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:31.971368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:31.971389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.272 [2024-07-15 13:01:31.971396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:31.979714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:31.979733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.272 [2024-07-15 13:01:31.979742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:31.990096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:31.990115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.272 [2024-07-15 13:01:31.990123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:31.999082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:31.999102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.272 [2024-07-15 13:01:31.999110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:32.008748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:32.008766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.272 [2024-07-15 13:01:32.008775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:32.016796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:32.016814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.272 [2024-07-15 13:01:32.016822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.272 [2024-07-15 13:01:32.028445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.272 [2024-07-15 13:01:32.028465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.028473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.038435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.038454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.038461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.047321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.047341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.047348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.057847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.057866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.057874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.066397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.066416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.066424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.077446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.077465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.077473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.086620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.086639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.086647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.096016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.096035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.096046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.105231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.105250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.105257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.114611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.114631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.114639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.125465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.125485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.125493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.134124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.134143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.134152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.144306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.144325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.144332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.154356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.154375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.154383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.162842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.162862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.162869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.172410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.172429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.172438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.182510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.182532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.182541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.192526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.192546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.192554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.201236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.201255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.201262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.211564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.211584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.211591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.273 [2024-07-15 13:01:32.221511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.273 [2024-07-15 13:01:32.221530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.273 [2024-07-15 13:01:32.221538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.532 [2024-07-15 13:01:32.231384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.532 [2024-07-15 13:01:32.231405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.532 [2024-07-15 13:01:32.231413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.532 [2024-07-15 13:01:32.239915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.532 [2024-07-15 13:01:32.239935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.532 [2024-07-15 13:01:32.239943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.532 [2024-07-15 13:01:32.250339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.532 [2024-07-15 13:01:32.250359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.532 [2024-07-15 13:01:32.250367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.532 [2024-07-15 13:01:32.260417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.532 [2024-07-15 13:01:32.260436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.532 [2024-07-15 13:01:32.260444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.532 [2024-07-15 13:01:32.269684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20)
00:27:01.532 [2024-07-15 13:01:32.269703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.532 [2024-07-15 13:01:32.269711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.279294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.279314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.279322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.289044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.289063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.289071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.298638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.298657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.298665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.310025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.310044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.310052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.319678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.319697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.319705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.327454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.327473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.327482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.337585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.337606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.337613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.347763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.532 [2024-07-15 13:01:32.347788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.532 [2024-07-15 13:01:32.347796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.532 [2024-07-15 13:01:32.356952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.356972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.356980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.366990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.367011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.367019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.376002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.376022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.376029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.385339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.385360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.385367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.394822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.394844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.394852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.405305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.405324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:01.533 [2024-07-15 13:01:32.405332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.413787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.413808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.413816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.424706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.424726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.424734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.433531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.433552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.433559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.443581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.443601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.443608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.452653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.452673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.452680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.463288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.463308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.463316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.471925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.471945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:9177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.471953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.533 [2024-07-15 13:01:32.484118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.533 [2024-07-15 13:01:32.484138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.533 [2024-07-15 13:01:32.484146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.493732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.493753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.493761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.502070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.502090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.502098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.511872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.511892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.511904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.523250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.523270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.523278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.531089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.531108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.531116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.541641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.541660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.541668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.551091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.551110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.551118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.560679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.560699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.560706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.792 [2024-07-15 13:01:32.570879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.792 [2024-07-15 13:01:32.570898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.792 [2024-07-15 13:01:32.570906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.579526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.579547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.579555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.589168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.589187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.589195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.599263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.599286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.599294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.608680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 
00:27:01.793 [2024-07-15 13:01:32.608700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.608707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.617203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.617223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.617237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.626607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.626628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.626635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.638340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.638360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.638368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.647179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.647199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.647206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 [2024-07-15 13:01:32.657810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2ff20) 00:27:01.793 [2024-07-15 13:01:32.657830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.793 [2024-07-15 13:01:32.657837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.793 00:27:01.793 Latency(us) 00:27:01.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:01.793 nvme0n1 : 2.00 26189.39 102.30 0.00 0.00 4882.27 2308.01 13563.10 00:27:01.793 =================================================================================================================== 00:27:01.793 Total : 26189.39 102.30 0.00 0.00 4882.27 2308.01 13563.10 00:27:01.793 0 00:27:01.793 13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
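The summary line above is internally consistent and can be cross-checked by hand; a throwaway one-liner (not part of the test scripts) under the assumption that the MiB/s column is derived from IOPS times the 4096-byte I/O size:

    awk 'BEGIN { printf "%.2f MiB/s\n", 26189.39 * 4096 / (1024 * 1024) }'   # prints 102.30 MiB/s, matching the nvme0n1 row
    awk 'BEGIN { printf "%.0f reads\n", 26189.39 * 2.00 }'                   # ~52379 reads over the 2.00 s runtime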
00:27:01.793 13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:02.052 13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1863360
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1863360 ']'
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1863360
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1863360
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1863360'
killing process with pid 1863360
13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1863360
Received shutdown signal, test time was about 2.000000 seconds
00:27:02.052
00:27:02.052 Latency(us)
00:27:02.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:02.052 ===================================================================================================================
00:27:02.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:02.052 13:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1863360
00:27:02.310 13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1864056
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1864056 /var/tmp/bperf.sock
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1864056 ']'
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.311 [2024-07-15 13:01:33.144057] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:27:02.311 [2024-07-15 13:01:33.144105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864056 ]
00:27:02.311 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:02.311 Zero copy mechanism will not be used.
00:27:02.311 EAL: No free 2048 kB hugepages reported on node 1
00:27:02.311 [2024-07-15 13:01:33.209674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.570 [2024-07-15 13:01:33.287571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:03.140 13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.398 13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:03.398 13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:03.656 nvme0n1
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
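Stripped of the xtrace noise, the setup traced above is four RPCs; a condensed sketch (sockets, address, and NQN as in this run; the bperf_rpc calls pass -s /var/tmp/bperf.sock while rpc_cmd appears to use the harness's default RPC socket, inferred from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per status code; retry indefinitely (-1)
    "$rpc" accel_error_inject_error -o crc32c -t disable                                          # leave crc32c clean while the controller attaches
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                            # --ddgst enables the NVMe/TCP data digest
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32                                    # then corrupt crc32c results for the I/O phase

With the retry count at -1, the corrupted digests never surface as I/O failures: each affected READ is retried and only the error counters advance.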
00:27:03.656 13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
13:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:03.656 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:03.656 Zero copy mechanism will not be used.
00:27:03.656 Running I/O for 2 seconds...
00:27:03.656 [2024-07-15 13:01:34.563950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0)
00:27:03.656 [2024-07-15 13:01:34.563984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.656 [2024-07-15 13:01:34.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:03.656 [2024-07-15 13:01:34.571913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0)
00:27:03.656 [2024-07-15 13:01:34.571940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.656 [2024-07-15 13:01:34.571949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:03.656 [2024-07-15 13:01:34.579588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0)
00:27:03.656 [2024-07-15 13:01:34.579618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.656 [2024-07-15 13:01:34.579630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:03.916 [... the same three-line pattern (data digest error on tqpair=(0x159d0b0), the failed len:32 READ, then a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0, sqhd stepping through 0001/0021/0041/0061) repeats for some fifty more cid/lba pairs, timestamps 13:01:34.587377 through 13:01:34.997993 ...]
00:27:04.181 [2024-07-15 13:01:35.007361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0)
00:27:04.181 [2024-07-15 13:01:35.007384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.181 [2024-07-15 13:01:35.007392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.016822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.016844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.181 [2024-07-15 13:01:35.016853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.026050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.026073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.181 [2024-07-15 13:01:35.026081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.034836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.034858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.181 [2024-07-15 13:01:35.034866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.044288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.044309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.181 [2024-07-15 13:01:35.044318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.053012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.053033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.181 [2024-07-15 13:01:35.053041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.062011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.062033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.181 [2024-07-15 13:01:35.062041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.181 [2024-07-15 13:01:35.070671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.181 [2024-07-15 13:01:35.070693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.070701] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.182 [2024-07-15 13:01:35.080213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.182 [2024-07-15 13:01:35.080242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.080250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.182 [2024-07-15 13:01:35.089175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.182 [2024-07-15 13:01:35.089197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.089205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.182 [2024-07-15 13:01:35.098411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.182 [2024-07-15 13:01:35.098433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.098441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.182 [2024-07-15 13:01:35.107446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.182 [2024-07-15 13:01:35.107468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.107479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.182 [2024-07-15 13:01:35.116502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.182 [2024-07-15 13:01:35.116524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.116533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.182 [2024-07-15 13:01:35.125498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.182 [2024-07-15 13:01:35.125520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.182 [2024-07-15 13:01:35.125529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.446 [2024-07-15 13:01:35.135197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.446 [2024-07-15 13:01:35.135221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.446 [2024-07-15 13:01:35.135235] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.446 [2024-07-15 13:01:35.142996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.446 [2024-07-15 13:01:35.143017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.446 [2024-07-15 13:01:35.143025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.446 [2024-07-15 13:01:35.151150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.446 [2024-07-15 13:01:35.151172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.446 [2024-07-15 13:01:35.151180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.446 [2024-07-15 13:01:35.159602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.159623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.159631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.167998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.168019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.168027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.175726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.175748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.175756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.183444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.183469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.183476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.191391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.191415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:04.447 [2024-07-15 13:01:35.191423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.199544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.199566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.199574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.206818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.206844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.206852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.213893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.213916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.213924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.220943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.220967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.220975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.228094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.228117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.235129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.235153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.235161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.243421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.243443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.243451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.251882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.251906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.251914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.260552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.260574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.260581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.269705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.269727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.269735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.278191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.278212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.278220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.286425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.286447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.286455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.294459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.294480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.294488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.302275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.302297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.302306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.309831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.309853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.309861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.317432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.317454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.317469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.326125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.326148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.326156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.334849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.334879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.343849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.343873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.343881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.352018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.352039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.352047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.359350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 
[2024-07-15 13:01:35.359371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.359378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.366512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.366533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.366540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.373432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.373454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.373462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.380190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.380212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.447 [2024-07-15 13:01:35.380220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.447 [2024-07-15 13:01:35.386651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.447 [2024-07-15 13:01:35.386672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.448 [2024-07-15 13:01:35.386680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.448 [2024-07-15 13:01:35.393852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.448 [2024-07-15 13:01:35.393874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.448 [2024-07-15 13:01:35.393882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.400469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.400491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.400499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.407781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.407805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.407813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.414573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.414596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.414604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.422006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.422029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.422038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.429826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.429850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.429859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.437101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.437123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.437131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.444281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.444303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.444316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.451277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.451299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.451307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.458067] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.458089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.458097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.464420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.464449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.464457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.468190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.468211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.468219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.473721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.473741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.473749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.479838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.479860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.479868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.485856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.485878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.485887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.491693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.491713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.491721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:04.708 [2024-07-15 13:01:35.497716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.497743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.497751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.503607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.503629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.503637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.509558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.708 [2024-07-15 13:01:35.509578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.708 [2024-07-15 13:01:35.509586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.708 [2024-07-15 13:01:35.515278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.515299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.515307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.521150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.521171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.521178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.526984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.527006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.527015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.532861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.532882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.532890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.538464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.538485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.538493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.543999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.544020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.544028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.549507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.549528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.549536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.555209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.555237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.555245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.561203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.561231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.561239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.567007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.567029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.567037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.572811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.572832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.572840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.578652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.578673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.578682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.584293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.584313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.584320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.589719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.589739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.589747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.595194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.595214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.595231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.600714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.600734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.600742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.606418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.606439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.606447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.612338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.612360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:04.709 [2024-07-15 13:01:35.612367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.618143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.618164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.618172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.624336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.624357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.624364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.630275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.630294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.630302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.635735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.635755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.635764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.641499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.641520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.641527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.647699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.647723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.647731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.653749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.653770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.653778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.709 [2024-07-15 13:01:35.660452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.709 [2024-07-15 13:01:35.660474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.709 [2024-07-15 13:01:35.660482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.969 [2024-07-15 13:01:35.668789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.969 [2024-07-15 13:01:35.668812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.969 [2024-07-15 13:01:35.668820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.969 [2024-07-15 13:01:35.677252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.969 [2024-07-15 13:01:35.677273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.969 [2024-07-15 13:01:35.677282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:04.969 [2024-07-15 13:01:35.686114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.969 [2024-07-15 13:01:35.686136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.969 [2024-07-15 13:01:35.686145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:04.969 [2024-07-15 13:01:35.695186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.969 [2024-07-15 13:01:35.695208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.969 [2024-07-15 13:01:35.695216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.969 [2024-07-15 13:01:35.704494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.969 [2024-07-15 13:01:35.704515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.969 [2024-07-15 13:01:35.704524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:04.969 [2024-07-15 13:01:35.713501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159d0b0) 00:27:04.969 [2024-07-15 13:01:35.713522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.970 [2024-07-15 13:01:35.713530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... this three-line pattern repeats for the rest of the 2 s randread run (completions 2024-07-15 13:01:35.721711 through 13:01:36.555723): nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done flags a data digest error on tqpair=(0x159d0b0), nvme_qpair.c prints the failed READ (sqid:1, len:32, cid and lba varying), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); several hundred near-identical records elided ...]
00:27:05.754
00:27:05.754 Latency(us)
00:27:05.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.754 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:05.754 nvme0n1 : 2.00 4185.59 523.20 0.00 0.00 3819.11 708.79 10143.83
00:27:05.754 ===================================================================================================================
00:27:05.754 Total : 4185.59 523.20 0.00 0.00 3819.11 708.79 10143.83
00:27:05.754 0
00:27:05.754 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:05.754 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:05.754 | .driver_specific
00:27:05.754 | .nvme_error
00:27:05.754 | .status_code
00:27:05.754 | .command_transient_transport_error'
00:27:05.754 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:05.754 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 ))
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1864056
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1864056 ']'
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1864056
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1864056
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1864056'
killing process with pid 1864056
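The "(( 270 > 0 ))" check a few lines up is the pass condition for this randread leg: the harness pulls the transient-error counter out of bdevperf's iostat JSON and requires it to be non-zero. A minimal standalone sketch of that query, assuming the same bperf RPC socket and workspace path used in this run:

    # count completions recorded as COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Here the pipeline returned 270, i.e. the injected digest corruption did surface as retryable transient transport errors rather than hard I/O failures.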
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1864056
00:27:06.012 Received shutdown signal, test time was about 2.000000 seconds
00:27:06.012
00:27:06.012 Latency(us)
00:27:06.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:06.012 ===================================================================================================================
00:27:06.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:06.012 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1864056
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1864726
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1864726 /var/tmp/bperf.sock
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1864726 ']'
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:06.270 13:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:06.270 [2024-07-15 13:01:37.040037] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
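The startup banner above belongs to the bdevperf instance run_bperf_err just forked for the randwrite leg; waitforlisten then polls until the process answers on its RPC socket. A sketch of the equivalent manual launch, arguments taken from the trace (the -z flag makes bdevperf idle until a perform_tests request arrives over RPC):

    # core mask 0x2, 4096 B random writes, queue depth 128, 2 s runtime
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!    # 1864726 in this run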
00:27:06.270 [2024-07-15 13:01:37.040037] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:27:06.270 [2024-07-15 13:01:37.040086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864726 ]
00:27:06.270 EAL: No free 2048 kB hugepages reported on node 1
00:27:06.270 [2024-07-15 13:01:37.109024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:06.270 [2024-07-15 13:01:37.187986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:07.205 13:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:07.205 13:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:07.205 13:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.205 13:01:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.205 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:07.205 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:07.205 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.205 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:07.205 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.205 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.463 nvme0n1
00:27:07.463 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:07.463 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:07.463 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.463 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:07.463 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:07.463 13:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:07.463 Running I/O for 2 seconds...
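Stripped of the xtrace noise, the RPC sequence above is what arms the data-digest error test. A sketch using the exact calls from this log follows; splitting the calls between the bperf socket and the target's default RPC socket reflects the bperf_rpc/rpc_cmd distinction in the trace and is an inference, not something the log states outright:

```bash
#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}  # assumed checkout path
rpc_bperf() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
rpc_tgt()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }  # default socket: the nvmf target (inferred)

# Track error counters per NVMe status code and retry failed I/Os forever,
# so digest errors are counted rather than failing the job.
rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any stale crc32c error injection, then attach the TCP controller
# with data digest (--ddgst) enabled.
rpc_tgt accel_error_inject_error -o crc32c -t disable
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c results in the accel layer so data digests stop matching.
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the 2-second randwrite run against the attached bdev.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```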
00:27:07.463 [2024-07-15 13:01:38.391019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640
00:27:07.463 [2024-07-15 13:01:38.391233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:07.463 [2024-07-15 13:01:38.391263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:07.463 [2024-07-15 13:01:38.400704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640
00:27:07.463 [2024-07-15 13:01:38.400888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:07.463 [2024-07-15 13:01:38.400909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same three-line pattern repeats for every queued write from 13:01:38.410 through 13:01:39.517 (runtime stamps 00:27:07.720 through 00:27:08.759), alternating cid:0/cid:1 with varying LBAs: a data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640, the WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the capture breaks off mid-entry ...]
13:01:39.517409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.527013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.527208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.527231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.536612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.536786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.536804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.546113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.546319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.546337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.555614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.555789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.555806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.565159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.565345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.565362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.574641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.574816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.574832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.584168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.584350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 
[2024-07-15 13:01:39.584368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.593682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.593858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.593875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.603203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.603383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.759 [2024-07-15 13:01:39.603401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.759 [2024-07-15 13:01:39.612712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.759 [2024-07-15 13:01:39.612886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.612903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.622237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.622412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.622428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.631722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.631896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.631914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.641258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.641433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.641450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.650718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.650894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:08.760 [2024-07-15 13:01:39.650911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.660239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.660414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.660431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.669759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.669932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.669949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.679260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.679437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.679454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.689054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.689251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.689270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.698635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.698827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.698844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.760 [2024-07-15 13:01:39.708216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:08.760 [2024-07-15 13:01:39.708417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.760 [2024-07-15 13:01:39.708434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.717951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.718128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:09.019 [2024-07-15 13:01:39.718144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.727443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.727639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.737000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.737177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.737194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.746521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.746695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.746711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.756004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.756181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.756198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.765517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.765692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.765708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.775010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.775185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.775202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.784517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.784692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1380 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.784711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.794052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.794231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.794249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.803543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.803720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.803736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.813206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.813395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.813413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-15 13:01:39.822744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.019 [2024-07-15 13:01:39.822919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.019 [2024-07-15 13:01:39.822936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.832257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.832452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.832469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.841833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.842008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.842025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.851316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.851490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2650 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.851507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.860806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.860983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.860999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.870419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.870593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.870610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.879893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.880069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.880086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.889400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.889576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.889593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.898940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.899122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.899139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.908490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.908664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.908680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.918042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.918217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6761 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.918238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.927525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.927698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.927714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.937033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.937211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.937233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.946721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.946913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.946931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.956301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.956494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.956519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.020 [2024-07-15 13:01:39.965851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.020 [2024-07-15 13:01:39.966027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.020 [2024-07-15 13:01:39.966044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:39.975670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:39.975866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:39.975892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:39.985249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:39.985442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9960 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:39.985459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:39.994807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:39.994982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:39.994999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.004468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.004654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.004674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.018473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.018677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.018698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.030171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.030382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.030401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.041731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.041934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.041954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.053084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.053311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.053330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.064341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.064557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:3638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.064576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.077209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.077400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.077424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.087051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.087236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.087255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.096866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.097044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.097062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.106650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.106829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.106846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.116449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.116627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.116644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.126194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.126380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.126397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.135689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.135884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.135901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.304 [2024-07-15 13:01:40.145275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.304 [2024-07-15 13:01:40.145484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.304 [2024-07-15 13:01:40.145503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.155041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.155236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.155253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.164832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.165017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.165034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.174611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.174786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.174803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.184362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.184544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.184561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.194170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.194357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.194374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.203926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.204107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:10693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.204124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.213689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.213866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.213884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.223480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.223659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.223675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.233248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.233426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.233445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.243019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.243200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.243218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.305 [2024-07-15 13:01:40.252809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.305 [2024-07-15 13:01:40.252987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.305 [2024-07-15 13:01:40.253005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.572 [2024-07-15 13:01:40.262601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.572 [2024-07-15 13:01:40.262783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.572 [2024-07-15 13:01:40.262801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.272406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.272583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:11539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.272600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.282154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.282343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.282361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.291928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.292108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.292125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.301738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.301917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.301934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.311496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.311676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.311693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.321415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.321594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.321612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.331182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.331370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.331397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.340920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.341098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.341116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.350692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.350872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.350890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.360438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.360617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.360634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.370169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.370354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.370371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 [2024-07-15 13:01:40.379950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb14d0) with pdu=0x2000190fd640 00:27:09.573 [2024-07-15 13:01:40.380127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.573 [2024-07-15 13:01:40.380144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.573 00:27:09.573 Latency(us) 00:27:09.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.573 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:09.573 nvme0n1 : 2.00 26353.29 102.94 0.00 0.00 4848.25 4530.53 14702.86 00:27:09.573 =================================================================================================================== 00:27:09.573 Total : 26353.29 102.94 0.00 0.00 4848.25 4530.53 14702.86 00:27:09.573 0 00:27:09.573 13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:09.573 13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:09.573 13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:09.573 | .driver_specific 00:27:09.573 | .nvme_error 00:27:09.573 | .status_code 00:27:09.573 | .command_transient_transport_error' 00:27:09.573 13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:09.831 13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 207 > 0 ))
00:27:09.831
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1864726
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1864726 ']'
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1864726
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1864726
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1864726'
killing process with pid 1864726
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1864726
Received shutdown signal, test time was about 2.000000 seconds
00:27:09.831
00:27:09.831 Latency(us)
00:27:09.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.831 ===================================================================================================================
00:27:09.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:09.831
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1864726
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1865237
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1865237 /var/tmp/bperf.sock
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1865237 ']'
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
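With the first bperf instance (pid 1864726) gone, run_bperf_err starts the second pass: 131072-byte random writes at queue depth 16. The launch-and-wait pattern in the trace reduces to the sketch below (bash; the polling loop is an assumption standing in for autotest_common.sh's waitforlisten, which the log shows only by name):

  # Start bdevperf idle: -z makes it wait for a perform_tests RPC instead of
  # running immediately; -m 2 pins the reactor to core 1 (mask 0x2), matching
  # the "Reactor started on core 1" line that follows.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # waitforlisten equivalent: retry (max_retries=100 in the trace) until the
  # app answers on its UNIX domain socket.
  for ((i = 0; i < 100; i++)); do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done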
00:27:10.089 13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
13:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.089 [2024-07-15 13:01:40.858102] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:27:10.089 [2024-07-15 13:01:40.858150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865237 ]
00:27:10.089 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:10.089 Zero copy mechanism will not be used.
00:27:10.090 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.090 [2024-07-15 13:01:40.925796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.090 [2024-07-15 13:01:41.005311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.025 13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:01:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:11.283 nvme0n1
13:01:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
13:01:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
13:01:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:01:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:01:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
13:01:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
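Those RPCs are the whole error-injection setup for this pass, and they explain the stream of digest errors below. In outline (bash; the commands are verbatim from the trace, the comments are interpretation):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Count per-status NVMe errors; retry failed I/O forever instead of failing the job.
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any leftover crc32c fault from the previous pass.
  $rpc -s $sock accel_error_inject_error -o crc32c -t disable
  # Attach the target with data digest enabled: every data PDU now carries a CRC-32C DDGST.
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm the fault: corrupt crc32c results in the accel layer (-t corrupt -i 32).
  $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick the paused bdevperf instance.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $sock perform_tests

The data digest is evidently computed through the accel framework's crc32c path, so the injected corruption goes out on the wire: the recomputed CRC-32C on the receiving side disagrees (the tcp.c:2067 data_crc32_calc_done errors), and the affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR. The (00/22) in the completions is status code type 00h, status code 22h, the NVMe generic Transient Transport Error, which bdev_nvme then retries under --bdev-retry-count -1 while --nvme-error-stat counts it.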
00:27:11.283 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:11.283 Zero copy mechanism will not be used.
00:27:11.283 Running I/O for 2 seconds...
00:27:11.283 [2024-07-15 13:01:42.218978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:11.283 [2024-07-15 13:01:42.219081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.283 [2024-07-15 13:01:42.219108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... twelve further entries in the same three-line pattern follow, 13:01:42.226049 through 13:01:42.291417, all on tqpair=(0x1bb1810) with pdu=0x2000190fef90: len:32 WRITEs on qid:1 cid:15 failing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0041/0061/0001/0021 ...]
00:27:11.543 [2024-07-15 13:01:42.296316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:11.543 [2024-07-15 13:01:42.296638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.543 [2024-07-15 13:01:42.296658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.301421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.301746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.301764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.306566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.306901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.306920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.311290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.311616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.311635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.316659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.316979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.316998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.322280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.322612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.322631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.328579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.328914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.328932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.333896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.334209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.339022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.339362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.339384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.344037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.344371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.344390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.543 [2024-07-15 13:01:42.349281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.543 [2024-07-15 13:01:42.349630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.543 [2024-07-15 13:01:42.349649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.355082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.355417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.355436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.361464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.361795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.361814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.367348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.367672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.367691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.372635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.372963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.372982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.378022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.378362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.378381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.383003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.383340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.383359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.388309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.388652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.388671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.393752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.394080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.394099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.399764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.400104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.400123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.405096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.405431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.405449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.410396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.410721] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.410739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.415551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.415877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.415895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.420745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.421074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.421092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.425974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.426302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.426321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.431216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.431548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.431565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.437311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.437663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.437682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.442845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.443174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.443193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.447993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 
00:27:11.544 [2024-07-15 13:01:42.448320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.448340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.453119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.453442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.453462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.458386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.458722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.458742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.463353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.463682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.463702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.468416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.468734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.468753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.474414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.474875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.474894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.481788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.482156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.482177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.544 [2024-07-15 13:01:42.488520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.544 [2024-07-15 13:01:42.488921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.544 [2024-07-15 13:01:42.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.496994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.497455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.497476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.505101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.505321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.505339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.513362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.513759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.513778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.521415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.521802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.521822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.529540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.529961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.529981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.538070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.538504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.538526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.546652] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.547130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.547150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.554775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.555263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.555283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.561630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.561971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.561991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.569057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.569452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.569472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.575153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.575533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.575553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.580745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.581085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.581105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.585675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.586010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.586030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:11.804 [2024-07-15 13:01:42.590506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.590844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.590865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.595082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.804 [2024-07-15 13:01:42.595420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.804 [2024-07-15 13:01:42.595440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.804 [2024-07-15 13:01:42.599648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.599979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.600002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.604174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.604510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.604530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.608697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.609035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.609054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.613211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.613544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.613564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.617669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.618005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.618025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.622175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.622505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.622525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.626695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.627042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.631165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.631493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.631513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.635719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.636051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.636072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.640207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.640560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.640580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.644778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.645108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.645128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.649302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.649643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.649664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.653892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.654231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.654252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.658387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.658709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.658729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.662835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.663163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.663184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.667308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.667630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.667650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.671778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.672119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.672140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.676662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.677003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.677023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.681720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.682062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.682082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.686463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.686790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.686811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.691119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.691463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.691484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.695809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.696145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.696165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.700382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.700720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.700740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.704854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.705188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.705207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.709300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.709624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.709644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.713814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.714157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 
[2024-07-15 13:01:42.714177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.718452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.718778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.718801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.723970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.724305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.724325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.728840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.729179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.805 [2024-07-15 13:01:42.729199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.805 [2024-07-15 13:01:42.733760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.805 [2024-07-15 13:01:42.734120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.806 [2024-07-15 13:01:42.734141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.806 [2024-07-15 13:01:42.739514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.806 [2024-07-15 13:01:42.739852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.806 [2024-07-15 13:01:42.739873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.806 [2024-07-15 13:01:42.745528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.806 [2024-07-15 13:01:42.745993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.806 [2024-07-15 13:01:42.746013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.806 [2024-07-15 13:01:42.753293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:11.806 [2024-07-15 13:01:42.753651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.806 [2024-07-15 13:01:42.753672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.760234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.760663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.760684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.768527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.768975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.768995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.776494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.776947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.776966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.784799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.785200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.785219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.793437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.793789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.793808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.801118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.801498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.801517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.809569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.809979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.809998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.817469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.817837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.817855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.825588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.825959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.825978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.833181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.833559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.833578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.065 [2024-07-15 13:01:42.840700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.065 [2024-07-15 13:01:42.841117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.065 [2024-07-15 13:01:42.841137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.848519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.848903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.848923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.855803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.856117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.856137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.862498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.862918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.862938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.869574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.869952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.869972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.876913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.877281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.877300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.885080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.885477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.885497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.892571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.892956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.892975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.899753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.900116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.900135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.905804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.906181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.906204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.913136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 
[2024-07-15 13:01:42.913523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.913542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.921050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.921405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.921424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.926895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.927182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.927202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.931809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.932086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.932105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.936859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.937145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.937165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.942545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.942831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.942850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.948710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.949000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.949019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.953770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.954055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.954074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.959008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.959295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.959314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.964539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.964803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.964822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.968585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.968802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.968820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.972485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.972703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.972722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.976343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.976569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.976588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.980339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.980551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.980569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.984694] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.984918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.984938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.988565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.988785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.988804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.066 [2024-07-15 13:01:42.992463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.066 [2024-07-15 13:01:42.992682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.066 [2024-07-15 13:01:42.992704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.067 [2024-07-15 13:01:42.996316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.067 [2024-07-15 13:01:42.996540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.067 [2024-07-15 13:01:42.996558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.067 [2024-07-15 13:01:43.000085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.067 [2024-07-15 13:01:43.000306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.067 [2024-07-15 13:01:43.000325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.067 [2024-07-15 13:01:43.004184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.067 [2024-07-15 13:01:43.004389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.067 [2024-07-15 13:01:43.004407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.067 [2024-07-15 13:01:43.008606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.067 [2024-07-15 13:01:43.008840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.067 [2024-07-15 13:01:43.008859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
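Every one of these completions carries status (00/22): status code type 0x0 (generic command status) and status code 0x22, which the completion printer names COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means the do-not-retry bit is clear, so the command may be retried. A tiny decoder covering just the pair this log prints; the struct here is a simplification for illustration, not the driver's completion layout:

    #include <stdint.h>
    #include <stdio.h>

    /* Only the fields the "(sct/sc) ... dnr:" print above exposes. */
    struct status { uint8_t sct; uint8_t sc; uint8_t dnr; };

    static const char *status_name(struct status s)
    {
        if (s.sct == 0x0 && s.sc == 0x22) {
            return "COMMAND TRANSIENT TRANSPORT ERROR";
        }
        return "(not decoded here)";
    }

    int main(void)
    {
        struct status s = { .sct = 0x0, .sc = 0x22, .dnr = 0 };

        printf("(%02x/%02x) %s, %s\n", s.sct, s.sc, status_name(s),
               s.dnr ? "do not retry" : "retryable");
        return 0;
    }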
00:27:12.067 [2024-07-15 13:01:43.013651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.067 [2024-07-15 13:01:43.013874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.067 [2024-07-15 13:01:43.013894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.067 [2024-07-15 13:01:43.018038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.067 [2024-07-15 13:01:43.018250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.067 [2024-07-15 13:01:43.018270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.022528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.022751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.022770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.026703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.026918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.026938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.031075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.031293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.031312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.035384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.035600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.035620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.039657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.039876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.039894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.043962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.044175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.044194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.327 [2024-07-15 13:01:43.048137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.327 [2024-07-15 13:01:43.048350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.327 [2024-07-15 13:01:43.048369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.052512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.052723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.052742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.056698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.056915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.056935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.061125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.061337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.061355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.065317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.065527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.065546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.069836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.070067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.070086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.074006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.074216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.074241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.078243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.078463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.078482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.082275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.082490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.082511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.086078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.086297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.086315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.089842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.090058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.090077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.093645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.093862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.093881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.097446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.097654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.097671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.101234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.101454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.101477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.105029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.105242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.105260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.108802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.109012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.109029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.112593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.112812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.112830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.116386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.116603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.116622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.120433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.120715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.120734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.125411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.125712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 
[2024-07-15 13:01:43.125731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.130799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.131051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.131071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.136372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.136649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.136669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.142082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.142407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.142425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.148256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.148503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.148522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.155336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.155622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.155641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.162621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.162969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.162988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.170072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.170408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.170427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.177462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.177754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.177774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.185358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.185686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.328 [2024-07-15 13:01:43.185705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.328 [2024-07-15 13:01:43.193067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.328 [2024-07-15 13:01:43.193373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.193392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.201011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.201324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.201343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.208831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.209061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.209080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.215456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.215651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.215668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.223397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.223672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.223690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.230633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.230946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.230966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.238452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.238670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.238690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.245267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.245466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.245484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.250945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.251137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.251155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.256498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.256723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.256741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.260766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.260966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.260989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.264816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.265014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.265033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.268713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.268904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.268923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.272535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.272729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.272748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.329 [2024-07-15 13:01:43.276403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.329 [2024-07-15 13:01:43.276595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.329 [2024-07-15 13:01:43.276615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.280284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.280478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.280497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.284056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.284249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.284268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.287920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.288116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.288134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.291701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 
[2024-07-15 13:01:43.291889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.291907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.295481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.295668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.295688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.299259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.299452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.299472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.303001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.303193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.303211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.306806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.306990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.307007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.310840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.589 [2024-07-15 13:01:43.311094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.589 [2024-07-15 13:01:43.311114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.589 [2024-07-15 13:01:43.316366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.316640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.316659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.323188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.323429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.323448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.331172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.331429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.331448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.338425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.338724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.338747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.345063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.345329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.345348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.351659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.351953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.351972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.358930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.359181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.359201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.365555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.365868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.365887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.372705] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.373024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.373044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.381014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.381391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.381410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.388454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.388696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.388717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.395499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.395779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.395799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.403080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.403313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.403331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.410856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.411068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.411088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.417452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.417666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.417685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
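When a run emits this many of these records, it is easier to sift a saved copy of the console text than to read it inline. A throwaway filter that counts digest errors and the matching transient-transport completions and keeps the last LBA seen, assuming the log is saved one record per line as the console emits it; the string matching is ad hoc, not part of any SPDK tooling:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[4096];
        unsigned long digest_errors = 0, transient = 0, lba = 0;

        /* strstr counts at most one hit per line, hence the
         * one-record-per-line assumption above. */
        while (fgets(line, sizeof(line), stdin)) {
            if (strstr(line, "Data digest error")) {
                digest_errors++;
            }
            if (strstr(line, "TRANSIENT TRANSPORT ERROR")) {
                transient++;
            }
            const char *p = strstr(line, "lba:");
            if (p != NULL) {
                sscanf(p, "lba:%lu", &lba);
            }
        }
        printf("%lu digest errors, %lu transient-transport completions, "
               "last lba %lu\n", digest_errors, transient, lba);
        return 0;
    }

Build with any C compiler and pipe the saved log through it, e.g. cc -o sift sift.c && ./sift < console.log.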
00:27:12.590 [2024-07-15 13:01:43.424359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.424661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.424680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.430243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.430476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.430495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.437168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.437420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.437439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.443706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.443989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.444009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.449025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.449276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.449295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.453785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.453984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.454001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.458341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.458593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.458612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.463629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.463873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.463892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.469477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.469683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.469701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.474199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.474431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.474450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.478636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.478876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.478895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.483139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.483439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.483459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.488584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.488839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.488858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.590 [2024-07-15 13:01:43.493698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:12.590 [2024-07-15 13:01:43.493946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.590 [2024-07-15 13:01:43.493965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.590 [2024-07-15 13:01:43.498347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:12.590 [2024-07-15 13:01:43.498566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.591 [2024-07-15 13:01:43.498589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:12.591 [2024-07-15 13:01:43.503239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:12.591 [2024-07-15 13:01:43.503472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.591 [2024-07-15 13:01:43.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:12.591 [2024-07-15 13:01:43.507753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:12.591 [2024-07-15 13:01:43.507968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.591 [2024-07-15 13:01:43.507987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence repeats for roughly 120 further WRITE commands between 13:01:43.512 and 13:01:44.178: a data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90, the WRITE printed (sqid:1 cid:15 nsid:1 len:32, lba varying per pass), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 ...]
00:27:13.377 [2024-07-15 13:01:44.181577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:13.378 [2024-07-15 13:01:44.181761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.378 [2024-07-15 13:01:44.181778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.378 [2024-07-15 13:01:44.186217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90
00:27:13.378 [2024-07-15 13:01:44.186385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.378 [2024-07-15 13:01:44.186403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:13.378 [2024-07-15 13:01:44.191283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:13.378 [2024-07-15 13:01:44.191475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.378 [2024-07-15 13:01:44.191496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.378 [2024-07-15 13:01:44.195669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:13.378 [2024-07-15 13:01:44.195841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.378 [2024-07-15 13:01:44.195861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.378 [2024-07-15 13:01:44.200189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:13.378 [2024-07-15 13:01:44.200379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.378 [2024-07-15 13:01:44.200403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.378 [2024-07-15 13:01:44.204353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:13.378 [2024-07-15 13:01:44.204546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.378 [2024-07-15 13:01:44.204565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.378 [2024-07-15 13:01:44.208649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:13.378 [2024-07-15 13:01:44.208817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.378 [2024-07-15 13:01:44.208835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.378 [2024-07-15 13:01:44.212767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb1810) with pdu=0x2000190fef90 00:27:13.378 [2024-07-15 13:01:44.212872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.378 [2024-07-15 13:01:44.212890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.378 00:27:13.378 Latency(us) 00:27:13.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.378 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:13.378 nvme0n1 : 2.00 5870.60 733.83 0.00 0.00 2721.26 1795.12 9630.94 00:27:13.378 =================================================================================================================== 00:27:13.378 Total : 5870.60 733.83 0.00 0.00 
2721.26 1795.12 9630.94 00:27:13.378 0 00:27:13.378 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:13.378 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:13.378 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:13.378 | .driver_specific 00:27:13.378 | .nvme_error 00:27:13.378 | .status_code 00:27:13.378 | .command_transient_transport_error' 00:27:13.378 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 379 > 0 )) 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1865237 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1865237 ']' 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1865237 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1865237 00:27:13.636 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:13.637 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:13.637 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1865237' 00:27:13.637 killing process with pid 1865237 00:27:13.637 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1865237 00:27:13.637 Received shutdown signal, test time was about 2.000000 seconds 00:27:13.637 00:27:13.637 Latency(us) 00:27:13.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.637 =================================================================================================================== 00:27:13.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:13.637 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1865237 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1863197 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1863197 ']' 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1863197 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1863197 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:13.895 13:01:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1863197' 00:27:13.895 killing process with pid 1863197 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1863197 00:27:13.895 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1863197 00:27:14.154 00:27:14.154 real 0m16.656s 00:27:14.154 user 0m31.960s 00:27:14.154 sys 0m4.516s 00:27:14.154 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.154 13:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.154 ************************************ 00:27:14.154 END TEST nvmf_digest_error 00:27:14.154 ************************************ 00:27:14.154 13:01:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:14.155 rmmod nvme_tcp 00:27:14.155 rmmod nvme_fabrics 00:27:14.155 rmmod nvme_keyring 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1863197 ']' 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1863197 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1863197 ']' 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1863197 00:27:14.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1863197) - No such process 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1863197 is not found' 00:27:14.155 Process with pid 1863197 is not found 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.155 13:01:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.686 13:01:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.686 
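For reference, the (( 379 > 0 )) assertion above is the core of nvmf_digest_error: every data digest failure seen earlier has to surface in the bdev's NVMe error counters as a command transient transport error. A minimal sketch of reading that counter by hand, assuming the bdevperf app is still serving RPC on /var/tmp/bperf.sock (the rpc.py path, bdev name, and jq filter are exactly the ones in the trace above):

  # Fetch iostat for nvme0n1 and extract the transient transport error count
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'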
00:27:16.686 real 0m41.884s 00:27:16.686 user 1m6.613s 00:27:16.686 sys 0m13.363s 00:27:16.686 13:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.686 13:01:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:16.686 ************************************ 00:27:16.686 END TEST nvmf_digest 00:27:16.686 ************************************ 00:27:16.686 13:01:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:16.686 13:01:47 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:27:16.686 13:01:47 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:27:16.686 13:01:47 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:27:16.686 13:01:47 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:16.686 13:01:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:16.686 13:01:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.686 13:01:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.686 ************************************ 00:27:16.686 START TEST nvmf_bdevperf 00:27:16.686 ************************************ 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:16.686 * Looking for test storage... 00:27:16.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.686 13:01:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2-6 -- # PATH prepended with /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin, then exported and echoed (the repeated full-PATH xtrace lines are elided here) 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.687 13:01:47
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.687 13:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:21.958 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- 
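nvmftestinit is about to probe for usable NICs. The scan below walks the harness's PCI ID tables (e810 IDs 0x1592/0x159b, x722 ID 0x37d2, plus several Mellanox IDs) and keeps devices that expose an up net device. A rough manual equivalent of the E810 half of that lookup, using standard pciutils syntax and the same sysfs glob the scan itself uses (the PCI address is the one the scan reports below):

  # List Intel E810 NICs by vendor:device ID, as the e810 table below matches
  lspci -d 8086:159b
  # Map a PCI function to its kernel net device name(s)
  ls /sys/bus/pci/devices/0000:86:00.0/net/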
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:21.959 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:21.959 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.959 
13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:21.959 Found net devices under 0000:86:00.0: cvl_0_0 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:21.959 Found net devices under 0000:86:00.1: cvl_0_1 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.959 13:01:52 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:22.219 00:27:22.219 --- 10.0.0.2 ping statistics --- 00:27:22.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.219 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:27:22.219 00:27:22.219 --- 10.0.0.1 ping statistics --- 00:27:22.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.219 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1869458 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1869458 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1869458 ']' 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
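Both pings succeeding is the exit condition of nvmf_tcp_init: the target port cvl_0_0 lives in its own network namespace while the initiator port cvl_0_1 stays in the root namespace, so initiator and target talk over the link between the two ports rather than loopback. The wiring, condensed from the trace above (only commands that appear in it):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator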
00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.219 13:01:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.219 [2024-07-15 13:01:53.038124] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:22.219 [2024-07-15 13:01:53.038164] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.219 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.219 [2024-07-15 13:01:53.093490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.219 [2024-07-15 13:01:53.171320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.219 [2024-07-15 13:01:53.171361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.219 [2024-07-15 13:01:53.171368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.219 [2024-07-15 13:01:53.171374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.219 [2024-07-15 13:01:53.171379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.219 [2024-07-15 13:01:53.171443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.219 [2024-07-15 13:01:53.171551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.219 [2024-07-15 13:01:53.171553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.156 [2024-07-15 13:01:53.906901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.156 Malloc0 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:23.156 [2024-07-15 13:01:53.970077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.156 { 00:27:23.156 "params": { 00:27:23.156 "name": "Nvme$subsystem", 00:27:23.156 "trtype": "$TEST_TRANSPORT", 00:27:23.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.156 "adrfam": "ipv4", 00:27:23.156 "trsvcid": "$NVMF_PORT", 00:27:23.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.156 "hdgst": ${hdgst:-false}, 00:27:23.156 "ddgst": ${ddgst:-false} 00:27:23.156 }, 00:27:23.156 "method": "bdev_nvme_attach_controller" 00:27:23.156 } 00:27:23.156 EOF 00:27:23.156 )") 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:23.156 13:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:23.156 "params": { 00:27:23.156 "name": "Nvme1", 00:27:23.156 "trtype": "tcp", 00:27:23.156 "traddr": "10.0.0.2", 00:27:23.156 "adrfam": "ipv4", 00:27:23.156 "trsvcid": "4420", 00:27:23.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:23.156 "hdgst": false, 00:27:23.156 "ddgst": false 00:27:23.156 }, 00:27:23.156 "method": "bdev_nvme_attach_controller" 00:27:23.156 }' 00:27:23.156 [2024-07-15 13:01:54.018788] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
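At this point the target side is fully configured; the five rpc_cmd calls above, stripped of xtrace noise, amount to the following sequence against the default /var/tmp/spdk.sock RPC socket the target was started with (a sketch; the size glosses in the comments come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, harness flags
  $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420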
00:27:23.156 [2024-07-15 13:01:54.018834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869508 ] 00:27:23.156 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.156 [2024-07-15 13:01:54.085672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.416 [2024-07-15 13:01:54.159881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.674 Running I/O for 1 seconds... 00:27:24.611 00:27:24.611 Latency(us) 00:27:24.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.611 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:24.611 Verification LBA range: start 0x0 length 0x4000 00:27:24.611 Nvme1n1 : 1.01 10966.27 42.84 0.00 0.00 11629.93 1951.83 12765.27 00:27:24.611 =================================================================================================================== 00:27:24.611 Total : 10966.27 42.84 0.00 0.00 11629.93 1951.83 12765.27 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1869881 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.871 { 00:27:24.871 "params": { 00:27:24.871 "name": "Nvme$subsystem", 00:27:24.871 "trtype": "$TEST_TRANSPORT", 00:27:24.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.871 "adrfam": "ipv4", 00:27:24.871 "trsvcid": "$NVMF_PORT", 00:27:24.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.871 "hdgst": ${hdgst:-false}, 00:27:24.871 "ddgst": ${ddgst:-false} 00:27:24.871 }, 00:27:24.871 "method": "bdev_nvme_attach_controller" 00:27:24.871 } 00:27:24.871 EOF 00:27:24.871 )") 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:24.871 13:01:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:24.871 "params": { 00:27:24.871 "name": "Nvme1", 00:27:24.871 "trtype": "tcp", 00:27:24.871 "traddr": "10.0.0.2", 00:27:24.871 "adrfam": "ipv4", 00:27:24.871 "trsvcid": "4420", 00:27:24.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:24.871 "hdgst": false, 00:27:24.871 "ddgst": false 00:27:24.871 }, 00:27:24.871 "method": "bdev_nvme_attach_controller" 00:27:24.871 }' 00:27:24.871 [2024-07-15 13:01:55.718251] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
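That first 1-second verify pass establishes the baseline (about 10966 IOPS here); the second bdevperf launch reuses the same attach-controller JSON on /dev/fd/63, with -t 15 and -f added, and the test then kills the target mid-run. To replay it without process substitution, the printed object can be wrapped in SPDK's JSON config envelope; the subsystems/bdev wrapper below is an assumption about gen_nvmf_target_json's full output rather than something shown in this trace, and bperf.json is just a scratch file name:

  cat > bperf.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
  } ] } ] }
  EOF
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json bperf.json -q 128 -o 4096 -w verify -t 15 -f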
00:27:24.871 [2024-07-15 13:01:55.718303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869881 ] 00:27:24.871 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.871 [2024-07-15 13:01:55.782992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.130 [2024-07-15 13:01:55.856066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.130 Running I/O for 15 seconds... 00:27:28.458 13:01:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1869458 00:27:28.458 13:01:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:28.458 [2024-07-15 13:01:58.688806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 13:01:58.688845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 13:01:58.688873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.458 [2024-07-15 13:01:58.688892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.458 [2024-07-15 13:01:58.688907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.458 [2024-07-15 13:01:58.688923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.458 [2024-07-15 13:01:58.688938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.458 [2024-07-15 13:01:58.688955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.458 [2024-07-15 13:01:58.688972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 13:01:58.688981 onward] with the target process gone, every remaining queued WRITE on qid:1 (len:8, various cids, LBAs 105296 through 105608 and beyond) was logged by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed by nvme_qpair.c: 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the several dozen identical print_command/print_completion pairs are elided here.
WRITE sqid:1 cid:115 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105696 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.459 [2024-07-15 13:01:58.689950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.459 [2024-07-15 13:01:58.689958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:28.460 [2024-07-15 13:01:58.689964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.689972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.689979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.689987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.689994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:28.460 [2024-07-15 13:01:58.690525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.460 [2024-07-15 13:01:58.690703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.460 [2024-07-15 13:01:58.690709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.461 [2024-07-15 13:01:58.690958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.461 [2024-07-15 13:01:58.690966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1c70 is same with the state(5) to be set 00:27:28.461 [2024-07-15 13:01:58.690975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:28.461 [2024-07-15 13:01:58.690980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:28.461 [2024-07-15 13:01:58.690987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106024 len:8 PRP1 0x0 PRP2 0x0 00:27:28.461 [2024-07-15 13:01:58.690993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
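The flood above is every I/O still queued on qpair 1 completing with status (00/08) once its submission queue was deleted. As a minimal sketch (not taken from this test's code), assuming an application-defined io_ctx for retry bookkeeping, an SPDK completion callback could classify that status like this:

/* Sketch only: classifies the "ABORTED - SQ DELETION (00/08)" status seen
 * above. Status codes come from spdk/nvme_spec.h; the callback shape matches
 * spdk_nvme_cmd_cb from spdk/nvme.h. io_ctx is a hypothetical app structure. */
#include <stdio.h>
#include "spdk/nvme.h"

struct io_ctx {
	int retries; /* hypothetical per-I/O retry counter */
};

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* normal completion */
	}
	/* "(00/08)" in the log is status code type 00 (generic) / status 08:
	 * the command was aborted because its submission queue was deleted,
	 * which is what happens to queued I/O when the qpair is torn down. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		ctx->retries++; /* safe to resubmit after the controller resets */
		fprintf(stderr, "I/O aborted by SQ deletion; retry #%d\n",
			ctx->retries);
	}
}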
00:27:28.461 [2024-07-15 13:01:58.691037] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a1c70 was disconnected and freed. reset controller.
00:27:28.461 [2024-07-15 13:01:58.693876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.461 [2024-07-15 13:01:58.693927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.461 [2024-07-15 13:01:58.694476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.461 [2024-07-15 13:01:58.694493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.461 [2024-07-15 13:01:58.694501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.461 [2024-07-15 13:01:58.694679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.461 [2024-07-15 13:01:58.694856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.461 [2024-07-15 13:01:58.694865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.461 [2024-07-15 13:01:58.694873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.461 [2024-07-15 13:01:58.697712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.461 [2024-07-15 13:01:58.707077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.461 [2024-07-15 13:01:58.707539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.461 [2024-07-15 13:01:58.707584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.461 [2024-07-15 13:01:58.707606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.461 [2024-07-15 13:01:58.708150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.461 [2024-07-15 13:01:58.708324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.461 [2024-07-15 13:01:58.708334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.461 [2024-07-15 13:01:58.708340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.461 [2024-07-15 13:01:58.710931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
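Both reset cycles above fail at the same first step: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED) because nothing is listening on the target side at this point in the test. A generic POSIX sketch of that failing step (an illustration only, not SPDK's posix.c):

/* Generic POSIX sketch of the failure the log keeps hitting: a TCP
 * connect() to a host:port with no listener fails with ECONNREFUSED (111). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int
try_connect(const char *ip, uint16_t port)
{
	struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
	int fd;

	if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
		return -EINVAL;
	}
	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return -errno;
	}
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* errno == ECONNREFUSED (111) matches the log's failure */
		fprintf(stderr, "connect() failed, errno = %d (%s)\n",
			errno, strerror(errno));
		close(fd);
		return -errno;
	}
	return fd; /* caller owns the connected socket */
}

int
main(void)
{
	/* The address/port mirror the ones in the log. */
	return try_connect("10.0.0.2", 4420) >= 0 ? 0 : 1;
}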
[... the same reset/reconnect cycle repeats 18 more times between 13:01:58.719973 and 13:01:58.942556, each with the identical sequence: nvme_ctrlr_disconnect "resetting controller" on nqn.2016-06.io.spdk:cnode1, posix_sock_create "connect() failed, errno = 111", sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420, "Failed to flush tqpair=0x1270980 (9): Bad file descriptor", "controller reinitialization failed" / "in failed state.", and _bdev_nvme_reset_ctrlr_complete "Resetting controller failed." ...]
00:27:28.463 [2024-07-15 13:01:58.951892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.463 [2024-07-15 13:01:58.952339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.463 [2024-07-15 13:01:58.952357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.463 [2024-07-15 13:01:58.952365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.463 [2024-07-15 13:01:58.952537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.463 [2024-07-15 13:01:58.952709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.463 [2024-07-15 13:01:58.952719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.463 [2024-07-15 13:01:58.952725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.463 [2024-07-15 13:01:58.955476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.463 [2024-07-15 13:01:58.964870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.463 [2024-07-15 13:01:58.965292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.463 [2024-07-15 13:01:58.965309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.463 [2024-07-15 13:01:58.965317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.463 [2024-07-15 13:01:58.965489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.463 [2024-07-15 13:01:58.965663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.463 [2024-07-15 13:01:58.965673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.463 [2024-07-15 13:01:58.965679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.463 [2024-07-15 13:01:58.968427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.463 [2024-07-15 13:01:58.977683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.463 [2024-07-15 13:01:58.978122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:58.978166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:58.978196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:58.978792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:58.979306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:58.979325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:58.979339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:58.985566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.464 [2024-07-15 13:01:58.992685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:58.993239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:58.993283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:58.993304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:58.993799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:58.994055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:58.994068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:58.994078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:58.998135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.464 [2024-07-15 13:01:59.005678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.006096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.006113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.006120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.006293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.006462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.006472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.006478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.009138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.464 [2024-07-15 13:01:59.018489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.018929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.018970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.018994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.019402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.019567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.019579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.019585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.022175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.464 [2024-07-15 13:01:59.031362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.031815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.031857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.031878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.032236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.032401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.032410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.032417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.035006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.464 [2024-07-15 13:01:59.044183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.044636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.044679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.044700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.045213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.045385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.045394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.045400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.047991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.464 [2024-07-15 13:01:59.057017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.057459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.057475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.057482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.057644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.057807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.057816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.057822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.060452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.464 [2024-07-15 13:01:59.069940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.070353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.070369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.070376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.070539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.070702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.070712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.070718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.073311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.464 [2024-07-15 13:01:59.082801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.083157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.083173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.083182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.464 [2024-07-15 13:01:59.083352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.464 [2024-07-15 13:01:59.083516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.464 [2024-07-15 13:01:59.083525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.464 [2024-07-15 13:01:59.083531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.464 [2024-07-15 13:01:59.086119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.464 [2024-07-15 13:01:59.095598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.464 [2024-07-15 13:01:59.096019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.464 [2024-07-15 13:01:59.096061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.464 [2024-07-15 13:01:59.096084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.096538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.096703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.096712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.096719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.099310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.465 [2024-07-15 13:01:59.108407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.108775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.108792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.108801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.108963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.109126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.109136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.109142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.111739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.465 [2024-07-15 13:01:59.121301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.121739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.121755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.121763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.121925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.122088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.122097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.122103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.124705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.465 [2024-07-15 13:01:59.134192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.134643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.134686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.134707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.135214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.135383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.135393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.135399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.138118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.465 [2024-07-15 13:01:59.146997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.147361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.147402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.147425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.148005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.148600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.148612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.148619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.151207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.465 [2024-07-15 13:01:59.159926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.160346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.160379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.160387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.160559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.160734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.160743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.160749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.163402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.465 [2024-07-15 13:01:59.172740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.173183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.173241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.173265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.173751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.173915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.173925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.173931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.176552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.465 [2024-07-15 13:01:59.185641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.186051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.186067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.186076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.186246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.186410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.186420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.186426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.189082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.465 [2024-07-15 13:01:59.198788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.199251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.199294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.199317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.199769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.199944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.199953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.199959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.202733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.465 [2024-07-15 13:01:59.211610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.212022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.212039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.212046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.212209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.212380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.212390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.212396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.214985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.465 [2024-07-15 13:01:59.224465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.224917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.465 [2024-07-15 13:01:59.224960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.465 [2024-07-15 13:01:59.224981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.465 [2024-07-15 13:01:59.225537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.465 [2024-07-15 13:01:59.225701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.465 [2024-07-15 13:01:59.225711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.465 [2024-07-15 13:01:59.225717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.465 [2024-07-15 13:01:59.228309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.465 [2024-07-15 13:01:59.237333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.465 [2024-07-15 13:01:59.237764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.237803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.237827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.238406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.238572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.238582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.238588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.241178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.466 [2024-07-15 13:01:59.250209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.250622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.250638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.250645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.250807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.250971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.250980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.250986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.253580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.466 [2024-07-15 13:01:59.263065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.263515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.263559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.263582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.264161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.264759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.264769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.264774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.267367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.466 [2024-07-15 13:01:59.275928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.276367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.276384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.276391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.276555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.276718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.276727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.276737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.279333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.466 [2024-07-15 13:01:59.288813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.289242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.289285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.289307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.289801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.289965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.289974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.289981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.292576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.466 [2024-07-15 13:01:59.301599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.302040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.302083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.302104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.302652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.303041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.303058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.303072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.309302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.466 [2024-07-15 13:01:59.316791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.317324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.317368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.317390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.317969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.318266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.318279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.318289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.322349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.466 [2024-07-15 13:01:59.329750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.330189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.330209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.330216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.330390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.330559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.330568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.330574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.333241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.466 [2024-07-15 13:01:59.342576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.343002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.343044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.343067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.343662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.344157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.344166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.344172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.346763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.466 [2024-07-15 13:01:59.355486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.355895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.355911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.466 [2024-07-15 13:01:59.355918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.466 [2024-07-15 13:01:59.356081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.466 [2024-07-15 13:01:59.356250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.466 [2024-07-15 13:01:59.356261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.466 [2024-07-15 13:01:59.356267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.466 [2024-07-15 13:01:59.358861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.466 [2024-07-15 13:01:59.368416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.466 [2024-07-15 13:01:59.368758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.466 [2024-07-15 13:01:59.368774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.467 [2024-07-15 13:01:59.368781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.467 [2024-07-15 13:01:59.368944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.467 [2024-07-15 13:01:59.369109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.467 [2024-07-15 13:01:59.369118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.467 [2024-07-15 13:01:59.369124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.467 [2024-07-15 13:01:59.371884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.782 [2024-07-15 13:01:59.381610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.782 [2024-07-15 13:01:59.382102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.782 [2024-07-15 13:01:59.382145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.782 [2024-07-15 13:01:59.382167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.782 [2024-07-15 13:01:59.382706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.782 [2024-07-15 13:01:59.382886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.782 [2024-07-15 13:01:59.382896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.782 [2024-07-15 13:01:59.382902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.782 [2024-07-15 13:01:59.385732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.783 [2024-07-15 13:01:59.394669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.783 [2024-07-15 13:01:59.395081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.783 [2024-07-15 13:01:59.395098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.783 [2024-07-15 13:01:59.395106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.783 [2024-07-15 13:01:59.395281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.783 [2024-07-15 13:01:59.395455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.783 [2024-07-15 13:01:59.395464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.783 [2024-07-15 13:01:59.395471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.783 [2024-07-15 13:01:59.398173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.783 [2024-07-15 13:01:59.407600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.783 [2024-07-15 13:01:59.408048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.783 [2024-07-15 13:01:59.408089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:28.783 [2024-07-15 13:01:59.408110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:28.783 [2024-07-15 13:01:59.408700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:28.783 [2024-07-15 13:01:59.408903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.783 [2024-07-15 13:01:59.408913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.783 [2024-07-15 13:01:59.408919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.783 [2024-07-15 13:01:59.411524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.783 [2024-07-15 13:01:59.420525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.783 [2024-07-15 13:01:59.420979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.783 [2024-07-15 13:01:59.420996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.783 [2024-07-15 13:01:59.421003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.783 [2024-07-15 13:01:59.421175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.783 [2024-07-15 13:01:59.421356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.783 [2024-07-15 13:01:59.421366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.783 [2024-07-15 13:01:59.421375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.783 [2024-07-15 13:01:59.424135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.783 [2024-07-15 13:01:59.433693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.783 [2024-07-15 13:01:59.434144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.783 [2024-07-15 13:01:59.434190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.783 [2024-07-15 13:01:59.434212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.783 [2024-07-15 13:01:59.434809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.783 [2024-07-15 13:01:59.435339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.783 [2024-07-15 13:01:59.435350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.783 [2024-07-15 13:01:59.435357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.783 [2024-07-15 13:01:59.438202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.783 [2024-07-15 13:01:59.446578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.783 [2024-07-15 13:01:59.447034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.783 [2024-07-15 13:01:59.447051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.785 [2024-07-15 13:01:59.447058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.785 [2024-07-15 13:01:59.447235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.785 [2024-07-15 13:01:59.447408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.785 [2024-07-15 13:01:59.447417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.785 [2024-07-15 13:01:59.447424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.785 [2024-07-15 13:01:59.450240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.785 [2024-07-15 13:01:59.459679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.785 [2024-07-15 13:01:59.460137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.785 [2024-07-15 13:01:59.460154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.785 [2024-07-15 13:01:59.460165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.785 [2024-07-15 13:01:59.460345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.785 [2024-07-15 13:01:59.460519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.785 [2024-07-15 13:01:59.460529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.785 [2024-07-15 13:01:59.460535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.785 [2024-07-15 13:01:59.463280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.785 [2024-07-15 13:01:59.472663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.785 [2024-07-15 13:01:59.473108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.785 [2024-07-15 13:01:59.473125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.785 [2024-07-15 13:01:59.473132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.785 [2024-07-15 13:01:59.473311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.785 [2024-07-15 13:01:59.473492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.785 [2024-07-15 13:01:59.473501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.785 [2024-07-15 13:01:59.473508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.785 [2024-07-15 13:01:59.476101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.785 [2024-07-15 13:01:59.485481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.785 [2024-07-15 13:01:59.485918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.785 [2024-07-15 13:01:59.485936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.785 [2024-07-15 13:01:59.485943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.785 [2024-07-15 13:01:59.486106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.785 [2024-07-15 13:01:59.486276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.785 [2024-07-15 13:01:59.486286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.785 [2024-07-15 13:01:59.486293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.785 [2024-07-15 13:01:59.488941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.785 [2024-07-15 13:01:59.498420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.785 [2024-07-15 13:01:59.498805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.785 [2024-07-15 13:01:59.498850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.785 [2024-07-15 13:01:59.498873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.785 [2024-07-15 13:01:59.499409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.785 [2024-07-15 13:01:59.499584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.785 [2024-07-15 13:01:59.499597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.786 [2024-07-15 13:01:59.499606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.786 [2024-07-15 13:01:59.502355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.786 [2024-07-15 13:01:59.511285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.786 [2024-07-15 13:01:59.511717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.786 [2024-07-15 13:01:59.511733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.786 [2024-07-15 13:01:59.511740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.786 [2024-07-15 13:01:59.511903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.786 [2024-07-15 13:01:59.512067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.786 [2024-07-15 13:01:59.512076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.786 [2024-07-15 13:01:59.512082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.786 [2024-07-15 13:01:59.514677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.786 [2024-07-15 13:01:59.524193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.524539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.524556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.524563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.524725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.524889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.524898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.524904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.527506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.537005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.537379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.537396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.537402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.537565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.537728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.537737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.537743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.540334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.549821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.550263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.550281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.550289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.550451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.550614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.550623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.550629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.553232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.562745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.563187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.563243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.563268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.563665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.563830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.563839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.563845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.566443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.575633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.576084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.576127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.576149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.576626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.576791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.576801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.576807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.579489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.588523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.588949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.588965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.588972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.589138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.589307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.589317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.589323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.591914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.601341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.601802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.601844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.601866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.602384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.602549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.602558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.602564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.605302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.614219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.787 [2024-07-15 13:01:59.614654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.787 [2024-07-15 13:01:59.614669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.787 [2024-07-15 13:01:59.614676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.787 [2024-07-15 13:01:59.614839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.787 [2024-07-15 13:01:59.615005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.787 [2024-07-15 13:01:59.615014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.787 [2024-07-15 13:01:59.615020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.787 [2024-07-15 13:01:59.617615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.787 [2024-07-15 13:01:59.627105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.627573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.627617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.627639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.628216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.628787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.628796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.628806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.631397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.639963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.640398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.640415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.640422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.640585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.640749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.640759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.640765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.643400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.652889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.653233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.653250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.653258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.653421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.653586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.653597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.653603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.656193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.665775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.666084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.666101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.666108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.666275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.666438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.666447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.666454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.669042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.678684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.679061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.679077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.679083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.679252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.679417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.679426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.679432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.682021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.691511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.691860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.691877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.691884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.692047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.692209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.692217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.692223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.694817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.704410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.704863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.704880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.704887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.705064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.705248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.705259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.705266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.708099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.717575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.717962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.718007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.718030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.718632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.788 [2024-07-15 13:01:59.718806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.788 [2024-07-15 13:01:59.718816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.788 [2024-07-15 13:01:59.718822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:28.788 [2024-07-15 13:01:59.721618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:28.788 [2024-07-15 13:01:59.730593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:28.788 [2024-07-15 13:01:59.730993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.788 [2024-07-15 13:01:59.731011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:28.788 [2024-07-15 13:01:59.731018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:28.788 [2024-07-15 13:01:59.731190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:28.789 [2024-07-15 13:01:59.731367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:28.789 [2024-07-15 13:01:59.731376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:28.789 [2024-07-15 13:01:59.731383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.734197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.743718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.744149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.744166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.744174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.744357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.744537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.744547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.744554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.747383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.756892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.757278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.757296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.757304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.757482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.757660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.757670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.757680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.760508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.770024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.770484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.770501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.770509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.770687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.770866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.770876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.770884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.773713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.783106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.783541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.783559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.783567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.783744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.783923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.783932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.783939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.786774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.796303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.796683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.796700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.796708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.796885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.797064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.797074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.797080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.799907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.809434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.809795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.809815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.809823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.810000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.810179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.810189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.810195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.813023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.822553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.822929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.822947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.822955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.823132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.823316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.823326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.823333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.826161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.835675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.836107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.836125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.836132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.836314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.058 [2024-07-15 13:01:59.836491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.058 [2024-07-15 13:01:59.836501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.058 [2024-07-15 13:01:59.836508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.058 [2024-07-15 13:01:59.839337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.058 [2024-07-15 13:01:59.848854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.058 [2024-07-15 13:01:59.849295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.058 [2024-07-15 13:01:59.849313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.058 [2024-07-15 13:01:59.849320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.058 [2024-07-15 13:01:59.849498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.849680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.849690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.849696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.852540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.861895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.862281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.862299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.862307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.862484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.862662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.862672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.862679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.865513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.875034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.875422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.875439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.875446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.875624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.875803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.875812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.875820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.878650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.888158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.888586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.888603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.888611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.888790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.888969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.888978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.888985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.891821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.901342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.901721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.901737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.901745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.901922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.902099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.902108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.902115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.904944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.914465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.914840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.914858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.914866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.915043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.915223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.915237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.915243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.918071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.927615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.928045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.928063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.928071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.928253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.928437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.928448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.928455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.931286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.940803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.941217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.941241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.941252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.941430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.941610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.941620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.941626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.944456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.953969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.954362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.954379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.954386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.954564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.059 [2024-07-15 13:01:59.954741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.059 [2024-07-15 13:01:59.954751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.059 [2024-07-15 13:01:59.954757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.059 [2024-07-15 13:01:59.957589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.059 [2024-07-15 13:01:59.967113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.059 [2024-07-15 13:01:59.967567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.059 [2024-07-15 13:01:59.967584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.059 [2024-07-15 13:01:59.967592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.059 [2024-07-15 13:01:59.967770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.060 [2024-07-15 13:01:59.967948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.060 [2024-07-15 13:01:59.967958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.060 [2024-07-15 13:01:59.967964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.060 [2024-07-15 13:01:59.970792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.060 [2024-07-15 13:01:59.980281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.060 [2024-07-15 13:01:59.980734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.060 [2024-07-15 13:01:59.980752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.060 [2024-07-15 13:01:59.980759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.060 [2024-07-15 13:01:59.980942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.060 [2024-07-15 13:01:59.981125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.060 [2024-07-15 13:01:59.981138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.060 [2024-07-15 13:01:59.981145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.060 [2024-07-15 13:01:59.983991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.060 [2024-07-15 13:01:59.993455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.060 [2024-07-15 13:01:59.993884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.060 [2024-07-15 13:01:59.993900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.060 [2024-07-15 13:01:59.993908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.060 [2024-07-15 13:01:59.994085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.060 [2024-07-15 13:01:59.994269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.060 [2024-07-15 13:01:59.994279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.060 [2024-07-15 13:01:59.994286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.060 [2024-07-15 13:01:59.997112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.060 [2024-07-15 13:02:00.006674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.060 [2024-07-15 13:02:00.007158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.060 [2024-07-15 13:02:00.007177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.060 [2024-07-15 13:02:00.007185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.060 [2024-07-15 13:02:00.007376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.060 [2024-07-15 13:02:00.007561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.060 [2024-07-15 13:02:00.007570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.060 [2024-07-15 13:02:00.007577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.319 [2024-07-15 13:02:00.011108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.319 [2024-07-15 13:02:00.019760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.319 [2024-07-15 13:02:00.020120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.319 [2024-07-15 13:02:00.020138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.319 [2024-07-15 13:02:00.020146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.319 [2024-07-15 13:02:00.020329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.319 [2024-07-15 13:02:00.020508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.319 [2024-07-15 13:02:00.020517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.319 [2024-07-15 13:02:00.020524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.319 [2024-07-15 13:02:00.023366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.319 [2024-07-15 13:02:00.032897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.319 [2024-07-15 13:02:00.033328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.319 [2024-07-15 13:02:00.033346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.319 [2024-07-15 13:02:00.033353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.319 [2024-07-15 13:02:00.033532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.319 [2024-07-15 13:02:00.033711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.319 [2024-07-15 13:02:00.033720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.319 [2024-07-15 13:02:00.033727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.319 [2024-07-15 13:02:00.036557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.319 [2024-07-15 13:02:00.046085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.319 [2024-07-15 13:02:00.046399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.319 [2024-07-15 13:02:00.046417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.319 [2024-07-15 13:02:00.046425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.319 [2024-07-15 13:02:00.046602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.319 [2024-07-15 13:02:00.046781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.319 [2024-07-15 13:02:00.046791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.319 [2024-07-15 13:02:00.046797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.319 [2024-07-15 13:02:00.049629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.319 [2024-07-15 13:02:00.059279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.319 [2024-07-15 13:02:00.059669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.319 [2024-07-15 13:02:00.059691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.319 [2024-07-15 13:02:00.059700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.319 [2024-07-15 13:02:00.059923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.319 [2024-07-15 13:02:00.060143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.320 [2024-07-15 13:02:00.060154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.320 [2024-07-15 13:02:00.060162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.320 [2024-07-15 13:02:00.063546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.320 [2024-07-15 13:02:00.072305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.320 [2024-07-15 13:02:00.072585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.320 [2024-07-15 13:02:00.072603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.320 [2024-07-15 13:02:00.072612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.320 [2024-07-15 13:02:00.072787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.320 [2024-07-15 13:02:00.072962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.320 [2024-07-15 13:02:00.072971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.320 [2024-07-15 13:02:00.072978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.320 [2024-07-15 13:02:00.075788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.320 [2024-07-15 13:02:00.085425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.320 [2024-07-15 13:02:00.085784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.320 [2024-07-15 13:02:00.085801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.320 [2024-07-15 13:02:00.085809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.320 [2024-07-15 13:02:00.085986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.320 [2024-07-15 13:02:00.086165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.320 [2024-07-15 13:02:00.086175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.320 [2024-07-15 13:02:00.086181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.320 [2024-07-15 13:02:00.089011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.320 [2024-07-15 13:02:00.098508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.098935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.098978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.099000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.099592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.100041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.100051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.100058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.102822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.111469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.111895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.111912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.111919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.112495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.112683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.112693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.112703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.115480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.320 [2024-07-15 13:02:00.124514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.124946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.124989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.125011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.125605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.125822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.125831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.125837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.128584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.137475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.137815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.137832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.137839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.138010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.138182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.138191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.138198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.140945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.320 [2024-07-15 13:02:00.150568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.151018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.151061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.151083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.151675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.152064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.152081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.152094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.158333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.165481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.166035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.166077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.166099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.166699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.166954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.166966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.166976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.171029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.320 [2024-07-15 13:02:00.178438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.178881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.178898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.178906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.179077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.179257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.179266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.179273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.182017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.191416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.191770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.191787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.191794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.191965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.192139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.192148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.192154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.194903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.320 [2024-07-15 13:02:00.204453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.204901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.204918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.204925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.205098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.205280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.205291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.205297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.208132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.217439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.217890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.217907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.217915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.218098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.218284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.218294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.218301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.221039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.320 [2024-07-15 13:02:00.230435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.230855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.230872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.230880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.231051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.231230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.231240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.231247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.233984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.243377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.243727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.243744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.243751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.243923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.244095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.244105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.244111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.246769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.320 [2024-07-15 13:02:00.256422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.256893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.256935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.256958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.257311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.257485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.257495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.257501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.320 [2024-07-15 13:02:00.260240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.320 [2024-07-15 13:02:00.269541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.320 [2024-07-15 13:02:00.269977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.320 [2024-07-15 13:02:00.269995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.320 [2024-07-15 13:02:00.270002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.320 [2024-07-15 13:02:00.270180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.320 [2024-07-15 13:02:00.270365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.320 [2024-07-15 13:02:00.270378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.320 [2024-07-15 13:02:00.270385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.273210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.581 [2024-07-15 13:02:00.282561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.282957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.283000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.283023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.283616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.284199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.284245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.284254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.286978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.581 [2024-07-15 13:02:00.295566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.296023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.296073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.296096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.296691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.297257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.297267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.297273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.300011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.581 [2024-07-15 13:02:00.308649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.309092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.309109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.309116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.309296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.309469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.309478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.309484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.312228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.581 [2024-07-15 13:02:00.321662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.322117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.322160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.322182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.322390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.322564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.322573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.322580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.325325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.581 [2024-07-15 13:02:00.334711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.335003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.335019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.335027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.335199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.335382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.335392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.335398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.338134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.581 [2024-07-15 13:02:00.347752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.348129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.348145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.348152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.348331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.348506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.348516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.348523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.581 [2024-07-15 13:02:00.351269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.581 [2024-07-15 13:02:00.360843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.581 [2024-07-15 13:02:00.361296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.581 [2024-07-15 13:02:00.361313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.581 [2024-07-15 13:02:00.361321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.581 [2024-07-15 13:02:00.361493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.581 [2024-07-15 13:02:00.361666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.581 [2024-07-15 13:02:00.361676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.581 [2024-07-15 13:02:00.361682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.364489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.582 [2024-07-15 13:02:00.373874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.374324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.374367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.374390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.374967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.375547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.375557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.375565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.378370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.582 [2024-07-15 13:02:00.386953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.387399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.387416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.387423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.387597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.387770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.387780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.387787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.390536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.582 [2024-07-15 13:02:00.399914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.400345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.400389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.400411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.400991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.401576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.401586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.401592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.404334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.582 [2024-07-15 13:02:00.412972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.413392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.413409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.413416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.413588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.413762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.413772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.413778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.416527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.582 [2024-07-15 13:02:00.426135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.426481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.426498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.426509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.426682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.426855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.426864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.426870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.429614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.582 [2024-07-15 13:02:00.439159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.439573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.439590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.439597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.439760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.439923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.439931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.439937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.442691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.582 [2024-07-15 13:02:00.452234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.452696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.452739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.452761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.453353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.453556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.453566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.453572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.456376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.582 [2024-07-15 13:02:00.465344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.465713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.465757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.465779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.466369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.466561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.466573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.466579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.469324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.582 [2024-07-15 13:02:00.478591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.478939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.478957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.478965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.479138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.479318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.479328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.479335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.482080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.582 [2024-07-15 13:02:00.491584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.582 [2024-07-15 13:02:00.492000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.582 [2024-07-15 13:02:00.492017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.582 [2024-07-15 13:02:00.492024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.582 [2024-07-15 13:02:00.492187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.582 [2024-07-15 13:02:00.492358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.582 [2024-07-15 13:02:00.492367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.582 [2024-07-15 13:02:00.492373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.582 [2024-07-15 13:02:00.494966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.583 [2024-07-15 13:02:00.504437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.583 [2024-07-15 13:02:00.504886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.583 [2024-07-15 13:02:00.504929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.583 [2024-07-15 13:02:00.504952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.583 [2024-07-15 13:02:00.505543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.583 [2024-07-15 13:02:00.505933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.583 [2024-07-15 13:02:00.505951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.583 [2024-07-15 13:02:00.505964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.583 [2024-07-15 13:02:00.512197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.583 [2024-07-15 13:02:00.519618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.583 [2024-07-15 13:02:00.520061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.583 [2024-07-15 13:02:00.520082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.583 [2024-07-15 13:02:00.520092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.583 [2024-07-15 13:02:00.520351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.583 [2024-07-15 13:02:00.520607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.583 [2024-07-15 13:02:00.520618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.583 [2024-07-15 13:02:00.520628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.583 [2024-07-15 13:02:00.524695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.583 [2024-07-15 13:02:00.532651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.583 [2024-07-15 13:02:00.533077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.583 [2024-07-15 13:02:00.533120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.583 [2024-07-15 13:02:00.533143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.843 [2024-07-15 13:02:00.533728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.843 [2024-07-15 13:02:00.533903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.843 [2024-07-15 13:02:00.533914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.843 [2024-07-15 13:02:00.533920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.843 [2024-07-15 13:02:00.536642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.843 [2024-07-15 13:02:00.545548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.843 [2024-07-15 13:02:00.545986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.843 [2024-07-15 13:02:00.546029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.843 [2024-07-15 13:02:00.546051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.843 [2024-07-15 13:02:00.546647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.843 [2024-07-15 13:02:00.547018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.843 [2024-07-15 13:02:00.547027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.843 [2024-07-15 13:02:00.547033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.843 [2024-07-15 13:02:00.549624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.843 [2024-07-15 13:02:00.558342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.843 [2024-07-15 13:02:00.558776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.843 [2024-07-15 13:02:00.558792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.843 [2024-07-15 13:02:00.558800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.843 [2024-07-15 13:02:00.558970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.843 [2024-07-15 13:02:00.559134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.843 [2024-07-15 13:02:00.559143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.843 [2024-07-15 13:02:00.559149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.843 [2024-07-15 13:02:00.561746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.843 [2024-07-15 13:02:00.571171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.843 [2024-07-15 13:02:00.571624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.843 [2024-07-15 13:02:00.571667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.843 [2024-07-15 13:02:00.571689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.843 [2024-07-15 13:02:00.572143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.843 [2024-07-15 13:02:00.572312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.843 [2024-07-15 13:02:00.572322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.844 [2024-07-15 13:02:00.572328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.844 [2024-07-15 13:02:00.574919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.844 [2024-07-15 13:02:00.584094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.844 [2024-07-15 13:02:00.584513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.844 [2024-07-15 13:02:00.584530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.844 [2024-07-15 13:02:00.584537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.844 [2024-07-15 13:02:00.584699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.844 [2024-07-15 13:02:00.584862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.844 [2024-07-15 13:02:00.584871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.844 [2024-07-15 13:02:00.584877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.844 [2024-07-15 13:02:00.587474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.844 [2024-07-15 13:02:00.596962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.844 [2024-07-15 13:02:00.597392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.844 [2024-07-15 13:02:00.597408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.844 [2024-07-15 13:02:00.597415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.844 [2024-07-15 13:02:00.597579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.844 [2024-07-15 13:02:00.597743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.844 [2024-07-15 13:02:00.597752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.844 [2024-07-15 13:02:00.597761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.844 [2024-07-15 13:02:00.600360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.844 [2024-07-15 13:02:00.609778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.844 [2024-07-15 13:02:00.610243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.844 [2024-07-15 13:02:00.610286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:29.844 [2024-07-15 13:02:00.610309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:29.844 [2024-07-15 13:02:00.610832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:29.844 [2024-07-15 13:02:00.610997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:29.844 [2024-07-15 13:02:00.611006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:29.844 [2024-07-15 13:02:00.611012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.844 [2024-07-15 13:02:00.613694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:29.844 [2024-07-15 13:02:00.622623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.622952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.622968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.622975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.623137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.623312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.623321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.623327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.625920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.635557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.635993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.636010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.636016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.636179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.636349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.636358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.636364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.638956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.648435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.648804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.648854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.648877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.649363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.649548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.649557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.649563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.652152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.661336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.661782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.661825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.661848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.662274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.662438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.662447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.662453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.665045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.674235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.674670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.674687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.674694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.674857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.675021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.675030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.675036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.677638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.687130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.687498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.687515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.687522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.687685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.687851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.687861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.687867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.690462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.699957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.700367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.700383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.700391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.700553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.700716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.700725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.700732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.844 [2024-07-15 13:02:00.703332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.844 [2024-07-15 13:02:00.712766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.844 [2024-07-15 13:02:00.713199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.844 [2024-07-15 13:02:00.713216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.844 [2024-07-15 13:02:00.713223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.844 [2024-07-15 13:02:00.713402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.844 [2024-07-15 13:02:00.713574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.844 [2024-07-15 13:02:00.713583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.844 [2024-07-15 13:02:00.713590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.716382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.845 [2024-07-15 13:02:00.725750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.845 [2024-07-15 13:02:00.726171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.845 [2024-07-15 13:02:00.726215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.845 [2024-07-15 13:02:00.726254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.845 [2024-07-15 13:02:00.726833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.845 [2024-07-15 13:02:00.727348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.845 [2024-07-15 13:02:00.727359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.845 [2024-07-15 13:02:00.727365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.730112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.845 [2024-07-15 13:02:00.738880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.845 [2024-07-15 13:02:00.739325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.845 [2024-07-15 13:02:00.739344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.845 [2024-07-15 13:02:00.739352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.845 [2024-07-15 13:02:00.739528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.845 [2024-07-15 13:02:00.739693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.845 [2024-07-15 13:02:00.739702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.845 [2024-07-15 13:02:00.739708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.742309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.845 [2024-07-15 13:02:00.751799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.845 [2024-07-15 13:02:00.752241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.845 [2024-07-15 13:02:00.752259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.845 [2024-07-15 13:02:00.752266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.845 [2024-07-15 13:02:00.752429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.845 [2024-07-15 13:02:00.752593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.845 [2024-07-15 13:02:00.752602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.845 [2024-07-15 13:02:00.752608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.755203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.845 [2024-07-15 13:02:00.764697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.845 [2024-07-15 13:02:00.765116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.845 [2024-07-15 13:02:00.765160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.845 [2024-07-15 13:02:00.765182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.845 [2024-07-15 13:02:00.765646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.845 [2024-07-15 13:02:00.765811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.845 [2024-07-15 13:02:00.765821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.845 [2024-07-15 13:02:00.765827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.768421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.845 [2024-07-15 13:02:00.777591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.845 [2024-07-15 13:02:00.778027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.845 [2024-07-15 13:02:00.778044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.845 [2024-07-15 13:02:00.778054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.845 [2024-07-15 13:02:00.778217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.845 [2024-07-15 13:02:00.778388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.845 [2024-07-15 13:02:00.778398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.845 [2024-07-15 13:02:00.778404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.780995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:29.845 [2024-07-15 13:02:00.790473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.845 [2024-07-15 13:02:00.790884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.845 [2024-07-15 13:02:00.790900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:29.845 [2024-07-15 13:02:00.790906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:29.845 [2024-07-15 13:02:00.791068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:29.845 [2024-07-15 13:02:00.791240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:29.845 [2024-07-15 13:02:00.791250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:29.845 [2024-07-15 13:02:00.791273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.845 [2024-07-15 13:02:00.794014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.803583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.804028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.804072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.804093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.804570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.804735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.804745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.804753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.807414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.816460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.816828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.816844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.816851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.817014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.817177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.817189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.817195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.819793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.829283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.829648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.829666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.829673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.829835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.829999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.830008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.830014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.832613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.842093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.842532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.842549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.842556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.842719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.842883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.842893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.842898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.845499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.854979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.855430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.855475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.855497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.856075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.856670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.856679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.856686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.859283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.867797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.868215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.868237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.868244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.868407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.868569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.868580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.868586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.871183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.880634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.881069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.881086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.881094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.107 [2024-07-15 13:02:00.881264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.107 [2024-07-15 13:02:00.881427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.107 [2024-07-15 13:02:00.881437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.107 [2024-07-15 13:02:00.881443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.107 [2024-07-15 13:02:00.884038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.107 [2024-07-15 13:02:00.893539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.107 [2024-07-15 13:02:00.893896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.107 [2024-07-15 13:02:00.893939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.107 [2024-07-15 13:02:00.893962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.894528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.894703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.894713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.894720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.897363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.906385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.906751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.906768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.906779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.906953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.907127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.907136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.907143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.909789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.919276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.919694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.919744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.919767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.920359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.920890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.920899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.920905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.923602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.932143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.932570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.932614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.932637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.933049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.933213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.933222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.933235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.935831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.945093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.945457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.945486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.945494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.945658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.945821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.945833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.945840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.948441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.957929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.958380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.958425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.958449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.958909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.959074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.959083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.959089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.961683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.970813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.971259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.971276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.971284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.971464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.971628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.971638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.971644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.974479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.983790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.984238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.984285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.984306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.984742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.984916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.984925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.984932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:00.987674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:00.996728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:00.997142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:00.997184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:00.997207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:00.997799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:00.997964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:00.997971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:00.997977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:01.000569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:01.009684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:01.010136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:01.010180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:01.010202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:01.010751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:01.010915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:01.010924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:01.010930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:01.013524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:01.022553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:01.022981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:01.022998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:01.023005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.108 [2024-07-15 13:02:01.023168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.108 [2024-07-15 13:02:01.023343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.108 [2024-07-15 13:02:01.023353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.108 [2024-07-15 13:02:01.023359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.108 [2024-07-15 13:02:01.025949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.108 [2024-07-15 13:02:01.035437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.108 [2024-07-15 13:02:01.035879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.108 [2024-07-15 13:02:01.035896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.108 [2024-07-15 13:02:01.035903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.109 [2024-07-15 13:02:01.036069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.109 [2024-07-15 13:02:01.036240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.109 [2024-07-15 13:02:01.036249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.109 [2024-07-15 13:02:01.036255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.109 [2024-07-15 13:02:01.038848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.109 [2024-07-15 13:02:01.048358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.109 [2024-07-15 13:02:01.048813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.109 [2024-07-15 13:02:01.048855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.109 [2024-07-15 13:02:01.048876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.109 [2024-07-15 13:02:01.049310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.109 [2024-07-15 13:02:01.049496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.109 [2024-07-15 13:02:01.049505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.109 [2024-07-15 13:02:01.049512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.109 [2024-07-15 13:02:01.052107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.370 [2024-07-15 13:02:01.061449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.370 [2024-07-15 13:02:01.061907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.370 [2024-07-15 13:02:01.061949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.370 [2024-07-15 13:02:01.061971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.370 [2024-07-15 13:02:01.062564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.370 [2024-07-15 13:02:01.063159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.370 [2024-07-15 13:02:01.063169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.370 [2024-07-15 13:02:01.063175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.370 [2024-07-15 13:02:01.065920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.074283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.074729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.074772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.074793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.075235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.075400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.075410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.075421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.078014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.087208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.087640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.087684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.087706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.088185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.088355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.088365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.088371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.090962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.100150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.100496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.100514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.100521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.100683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.100847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.100856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.100862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.103464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.113063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.113416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.113433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.113440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.113603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.113765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.113774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.113780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.116377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.125988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.126433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.126484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.126507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.127015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.127180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.127189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.127195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.129798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.139061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.139378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.139396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.139403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.139580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.139759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.139770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.139777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.142652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.152160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.152589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.152606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.152614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.152786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.152959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.152969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.152976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.155614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.165071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.165444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.165487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.165510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.166089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.166532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.166542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.166548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.169143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.177876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.178249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.178267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.178274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.178437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.178603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.178613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.178619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.181213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.190756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.191125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.191141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.191149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.191318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.191483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.191492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.371 [2024-07-15 13:02:01.191498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.371 [2024-07-15 13:02:01.194098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.371 [2024-07-15 13:02:01.203609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.371 [2024-07-15 13:02:01.203976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.371 [2024-07-15 13:02:01.204018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.371 [2024-07-15 13:02:01.204040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.371 [2024-07-15 13:02:01.204528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.371 [2024-07-15 13:02:01.204693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.371 [2024-07-15 13:02:01.204702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.204709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.207344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.216467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.216883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.216900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.216907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.217071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.217239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.217250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.217256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.219854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.229261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.229624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.229642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.229649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.229820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.229995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.230004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.230012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.232857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.242311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.242766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.242808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.242831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.243291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.243465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.243476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.243482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.246237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.255241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.255627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.255670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.255700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.256302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.256476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.256487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.256493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.259156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.268151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.268495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.268512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.268519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.268681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.268847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.268856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.268862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.271466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.280972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.281348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.281392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.281415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.281993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.282585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.282610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.282630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.288889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.295992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.296389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.296412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.296422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.296676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.296931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.296948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.296957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.301026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.309030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.309482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.309526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.309549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.310127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.310721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.310753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.310759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.372 [2024-07-15 13:02:01.313467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.372 [2024-07-15 13:02:01.322004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.372 [2024-07-15 13:02:01.322363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.372 [2024-07-15 13:02:01.322381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.372 [2024-07-15 13:02:01.322388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.372 [2024-07-15 13:02:01.322564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.372 [2024-07-15 13:02:01.322729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.372 [2024-07-15 13:02:01.322739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.372 [2024-07-15 13:02:01.322747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.325519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.334797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.335163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.335179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.335186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.335354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.335518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.335528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.335535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.338133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.347648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.348005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.348022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.348029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.348192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.348363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.348373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.348379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.350979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.360524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.360835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.360852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.360860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.361032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.361206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.361217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.361223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.363890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.373450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.373741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.373757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.373765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.373926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.374090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.374099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.374105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.376709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.386385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.386701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.386744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.386767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.387367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.387949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.387974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.387995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.390596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.399183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.399577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.399594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.399601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.399764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.399928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.399937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.399943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.402540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.411993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.412416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.412460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.412482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.413002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.413166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.413175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.413182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.415781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.424832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.425249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.425267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.425274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.425436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.425601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.634 [2024-07-15 13:02:01.425610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.634 [2024-07-15 13:02:01.425619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.634 [2024-07-15 13:02:01.428210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.634 [2024-07-15 13:02:01.437713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.634 [2024-07-15 13:02:01.438150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.634 [2024-07-15 13:02:01.438193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.634 [2024-07-15 13:02:01.438216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.634 [2024-07-15 13:02:01.438810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.634 [2024-07-15 13:02:01.439013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.439022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.439029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.441720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.450794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.451204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.451259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.451283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.451861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.452296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.452306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.452315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.455061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.463607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.464078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.464122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.464145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.464688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.465079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.465096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.465111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.471362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.479048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.479532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.479590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.479613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.480176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.480438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.480452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.480462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.484537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.492189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.492601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.492619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.492627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.492799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.492972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.492982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.492988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.495738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.505144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.505498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.505515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.505522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.505694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.505869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.505878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.505885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.508635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.517986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.518415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.518432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.518439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.518607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.518770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.518779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.518785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.521399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.530810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.531267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.531310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.531331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.531911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.532176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.532186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.532192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.534784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.543653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.544088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.544104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.544111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.544280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.544444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.544453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.544459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.547050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.556533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.556979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.557021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.635 [2024-07-15 13:02:01.557043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.635 [2024-07-15 13:02:01.557405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.635 [2024-07-15 13:02:01.557570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.635 [2024-07-15 13:02:01.557579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.635 [2024-07-15 13:02:01.557589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.635 [2024-07-15 13:02:01.560180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.635 [2024-07-15 13:02:01.569545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.635 [2024-07-15 13:02:01.570007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.635 [2024-07-15 13:02:01.570050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.636 [2024-07-15 13:02:01.570071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.636 [2024-07-15 13:02:01.570491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.636 [2024-07-15 13:02:01.570656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.636 [2024-07-15 13:02:01.570665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.636 [2024-07-15 13:02:01.570671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.636 [2024-07-15 13:02:01.573269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.636 [2024-07-15 13:02:01.582565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.636 [2024-07-15 13:02:01.583020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.636 [2024-07-15 13:02:01.583063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.636 [2024-07-15 13:02:01.583087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.636 [2024-07-15 13:02:01.583447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.636 [2024-07-15 13:02:01.583622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.636 [2024-07-15 13:02:01.583632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.636 [2024-07-15 13:02:01.583638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.636 [2024-07-15 13:02:01.586405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.595477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.595916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.595934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.595941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.596105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.596277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.596287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.596294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.598882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.608361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.608713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.608732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.608739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.608901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.609065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.609074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.609080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.611676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.621148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.621581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.621598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.621605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.621768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.621930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.621939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.621945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.624576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.634074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.634438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.634455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.634461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.634624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.634788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.634798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.634803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.637403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.646890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.647328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.647373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.647395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.647973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.648587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.648596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.648602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.651189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.659854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.660269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.660286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.660293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.660456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.660618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.660627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.660634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.663231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.672701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.673157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.673198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.673220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.673710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.673874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.673883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.673890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.676480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1869458 Killed "${NVMF_APP[@]}" "$@"
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:30.897 [2024-07-15 13:02:01.685663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:30.897 [2024-07-15 13:02:01.686105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.686122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.686130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.686312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.686485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.686494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.686501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.689311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1870863
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1870863
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1870863 ']'
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:30.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:30.897 13:02:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:30.897 [2024-07-15 13:02:01.698828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.699204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.699221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.699235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.699413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.699591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.699601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.699607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.702438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.711969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.712420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.712437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.897 [2024-07-15 13:02:01.712446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.897 [2024-07-15 13:02:01.712627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.897 [2024-07-15 13:02:01.712806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.897 [2024-07-15 13:02:01.712815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.897 [2024-07-15 13:02:01.712821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.897 [2024-07-15 13:02:01.715651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.897 [2024-07-15 13:02:01.725021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.897 [2024-07-15 13:02:01.725461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.897 [2024-07-15 13:02:01.725479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.725486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.725663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.725841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.725849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.725856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.728686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.738125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.738465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.738483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.738490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.738668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.738846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.738856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.738863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.739886] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:27:30.898 [2024-07-15 13:02:01.739929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:30.898 [2024-07-15 13:02:01.741690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.751211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.751665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.751683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.751691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.751869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.752047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.752057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.752064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.754891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.764394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.764788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.764806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.764814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.764992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.765170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.765180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.765186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 EAL: No free 2048 kB hugepages reported on node 1
00:27:30.898 [2024-07-15 13:02:01.768013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.777543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.777995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.778013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.778021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.778198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.778382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.778392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.778399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.781219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.790728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.791178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.791195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.791203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.791390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.791573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.791583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.791589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.794337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.803724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.804171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.804188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.804195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.804378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.804552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.804561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.804568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.807308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.810303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:30.898 [2024-07-15 13:02:01.816763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.817123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.817140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.817148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.817325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.817500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.817509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.817516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.820265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.829726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.830092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.830110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.830117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.830295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.830469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.830478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.830486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.833233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:30.898 [2024-07-15 13:02:01.842778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:30.898 [2024-07-15 13:02:01.843118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.898 [2024-07-15 13:02:01.843135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:30.898 [2024-07-15 13:02:01.843143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:30.898 [2024-07-15 13:02:01.843321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:30.898 [2024-07-15 13:02:01.843495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:30.898 [2024-07-15 13:02:01.843511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:30.898 [2024-07-15 13:02:01.843518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:30.898 [2024-07-15 13:02:01.846345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.855866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.856243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.856262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.856272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.856446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.856621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.856630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.856637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.859389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.868974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.869408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.869427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.869435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.869614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.869793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.869803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.869811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.872642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.882158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.882625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.882643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.882650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.882828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.883006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.883015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.883022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.885851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.890418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:31.160 [2024-07-15 13:02:01.890445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:31.160 [2024-07-15 13:02:01.890451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:31.160 [2024-07-15 13:02:01.890458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:31.160 [2024-07-15 13:02:01.890463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:31.160 [2024-07-15 13:02:01.890518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:31.160 [2024-07-15 13:02:01.890626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:31.160 [2024-07-15 13:02:01.890627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:31.160 [2024-07-15 13:02:01.895281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.895745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.895763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.895771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.895949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.896130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.896139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.896146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.898979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.908338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.908816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.908836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.908844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.909022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.909202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.909212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.909220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.912048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.921404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.921784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.921805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.921813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.921991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.922170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.922185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.922192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.925030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.934553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.934940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.934959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.934967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.935146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.935329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.935339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.935346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.938172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.947689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.948152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.948171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.948179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.948363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.948544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.948555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.948564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.951392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.960752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.160 [2024-07-15 13:02:01.961126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.160 [2024-07-15 13:02:01.961143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.160 [2024-07-15 13:02:01.961151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.160 [2024-07-15 13:02:01.961334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.160 [2024-07-15 13:02:01.961517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.160 [2024-07-15 13:02:01.961528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.160 [2024-07-15 13:02:01.961536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.160 [2024-07-15 13:02:01.964367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.160 [2024-07-15 13:02:01.973882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:01.974239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:01.974256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:01.974264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:01.974442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:01.974620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:01.974630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:01.974637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:01.977462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:01.986971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:01.987343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:01.987360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:01.987368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:01.987546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:01.987725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:01.987734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:01.987741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:01.990566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.000078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.000439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.000457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.000465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.000642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.000821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.000830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.000836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.003681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.013190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.013615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.013634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.013642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.013823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.014002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.014012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.014018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.016843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.026363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.026809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.026826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.026834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.027012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.027190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.027200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.027207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.030032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.039547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.039999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.040015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.040023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.040200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.040384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.040393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.040400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.043217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.052737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.053188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.053206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.053213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.053393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.053572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.053582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.053593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.056420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.065931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.066318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.066335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.066342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.066519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.066696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.066706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.066712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.069544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.079057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.079446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.079463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.079472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.079648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.079826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.079836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.079842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.082668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.092179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.092572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.092589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.092596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.092774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.092952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.092962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.092969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.095795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.161 [2024-07-15 13:02:02.105305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.161 [2024-07-15 13:02:02.105760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.161 [2024-07-15 13:02:02.105776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.161 [2024-07-15 13:02:02.105784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.161 [2024-07-15 13:02:02.105961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.161 [2024-07-15 13:02:02.106141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.161 [2024-07-15 13:02:02.106151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.161 [2024-07-15 13:02:02.106157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.161 [2024-07-15 13:02:02.108989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.422 [2024-07-15 13:02:02.118492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.422 [2024-07-15 13:02:02.118812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.422 [2024-07-15 13:02:02.118830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.422 [2024-07-15 13:02:02.118837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.422 [2024-07-15 13:02:02.119015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.422 [2024-07-15 13:02:02.119193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.422 [2024-07-15 13:02:02.119202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.422 [2024-07-15 13:02:02.119209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.422 [2024-07-15 13:02:02.122036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.422 [2024-07-15 13:02:02.131547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.422 [2024-07-15 13:02:02.132004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.422 [2024-07-15 13:02:02.132021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.422 [2024-07-15 13:02:02.132028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.422 [2024-07-15 13:02:02.132206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.422 [2024-07-15 13:02:02.132390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.422 [2024-07-15 13:02:02.132400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.422 [2024-07-15 13:02:02.132407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.422 [2024-07-15 13:02:02.135229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.422 [2024-07-15 13:02:02.144739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.422 [2024-07-15 13:02:02.145193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.422 [2024-07-15 13:02:02.145210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.422 [2024-07-15 13:02:02.145217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.422 [2024-07-15 13:02:02.145398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.422 [2024-07-15 13:02:02.145580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.422 [2024-07-15 13:02:02.145591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.422 [2024-07-15 13:02:02.145598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.422 [2024-07-15 13:02:02.148422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.422 [2024-07-15 13:02:02.157925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.422 [2024-07-15 13:02:02.158306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.422 [2024-07-15 13:02:02.158324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.422 [2024-07-15 13:02:02.158332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.422 [2024-07-15 13:02:02.158509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.422 [2024-07-15 13:02:02.158687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.422 [2024-07-15 13:02:02.158697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.422 [2024-07-15 13:02:02.158703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.422 [2024-07-15 13:02:02.161528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.171045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.171501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.171518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.171526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.171704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.171882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.171892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.171898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.174732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.184242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.184619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.184636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.184644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.184821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.184999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.185009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.185016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.187845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.197354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.197783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.197800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.197808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.197986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.198164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.198174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.198181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.201006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.210544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.210993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.211009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.211017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.211194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.211378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.211388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.211395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.214217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.223740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.224215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.224235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.224243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.224420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.224600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.224610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.224616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.227445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.236790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.237242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.237263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.237270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.237447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.237626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.237635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.237642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.240466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.249826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.250111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.250128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.250136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.250316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.250494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.250503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.250509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.253336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.263017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.263472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.263490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.263497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.263675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.263853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.263863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.263869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.266696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.276208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.276635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.276652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.276660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.276837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.277019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.277029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.277037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.279865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.289376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.423 [2024-07-15 13:02:02.289806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.423 [2024-07-15 13:02:02.289823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.423 [2024-07-15 13:02:02.289831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.423 [2024-07-15 13:02:02.290009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.423 [2024-07-15 13:02:02.290187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.423 [2024-07-15 13:02:02.290197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.423 [2024-07-15 13:02:02.290203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.423 [2024-07-15 13:02:02.293035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.423 [2024-07-15 13:02:02.302549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.423 [2024-07-15 13:02:02.302899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.423 [2024-07-15 13:02:02.302916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.423 [2024-07-15 13:02:02.302924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.423 [2024-07-15 13:02:02.303102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.423 [2024-07-15 13:02:02.303284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.423 [2024-07-15 13:02:02.303294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.423 [2024-07-15 13:02:02.303301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.424 [2024-07-15 13:02:02.306123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.424 [2024-07-15 13:02:02.315658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.424 [2024-07-15 13:02:02.316087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.424 [2024-07-15 13:02:02.316104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.424 [2024-07-15 13:02:02.316111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.424 [2024-07-15 13:02:02.316292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.424 [2024-07-15 13:02:02.316470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.424 [2024-07-15 13:02:02.316480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.424 [2024-07-15 13:02:02.316486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.424 [2024-07-15 13:02:02.319312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.424 [2024-07-15 13:02:02.328824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.424 [2024-07-15 13:02:02.329201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.424 [2024-07-15 13:02:02.329219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.424 [2024-07-15 13:02:02.329230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.424 [2024-07-15 13:02:02.329408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.424 [2024-07-15 13:02:02.329588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.424 [2024-07-15 13:02:02.329598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.424 [2024-07-15 13:02:02.329605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.424 [2024-07-15 13:02:02.332427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.424 [2024-07-15 13:02:02.341939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.424 [2024-07-15 13:02:02.342347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.424 [2024-07-15 13:02:02.342365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.424 [2024-07-15 13:02:02.342372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.424 [2024-07-15 13:02:02.342550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.424 [2024-07-15 13:02:02.342728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.424 [2024-07-15 13:02:02.342738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.424 [2024-07-15 13:02:02.342744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.424 [2024-07-15 13:02:02.345572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.424 [2024-07-15 13:02:02.355079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.424 [2024-07-15 13:02:02.355460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.424 [2024-07-15 13:02:02.355477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.424 [2024-07-15 13:02:02.355485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.424 [2024-07-15 13:02:02.355662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.424 [2024-07-15 13:02:02.355840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.424 [2024-07-15 13:02:02.355849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.424 [2024-07-15 13:02:02.355855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.424 [2024-07-15 13:02:02.358685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.424 [2024-07-15 13:02:02.368201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.424 [2024-07-15 13:02:02.368654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.424 [2024-07-15 13:02:02.368673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.424 [2024-07-15 13:02:02.368685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.424 [2024-07-15 13:02:02.368865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.424 [2024-07-15 13:02:02.369043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.424 [2024-07-15 13:02:02.369053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.424 [2024-07-15 13:02:02.369061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.424 [2024-07-15 13:02:02.371887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.684 [2024-07-15 13:02:02.381398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.381830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.381848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.381856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.382033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.382213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.382222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.382235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.385054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.684 [2024-07-15 13:02:02.394578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.395028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.395045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.395054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.395235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.395415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.395425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.395431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.398257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.684 [2024-07-15 13:02:02.407630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.408084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.408101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.408109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.408292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.408470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.408483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.408489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.411314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.684 [2024-07-15 13:02:02.420825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.421173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.421190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.421198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.421381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.421559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.421569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.421575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.424407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.684 [2024-07-15 13:02:02.433921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.434368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.434385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.434393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.434570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.434748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.434758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.434764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.437588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.684 [2024-07-15 13:02:02.447096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.447557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.447574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.447582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.447760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.447938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.447948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.447955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.450786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.684 [2024-07-15 13:02:02.460139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.460464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.460481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.460488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.460664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.460843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.460853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.460859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.463689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.684 [2024-07-15 13:02:02.473199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.473680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.473698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.473706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.473884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.474064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.474075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.474082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.476913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.684 [2024-07-15 13:02:02.486298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.486759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.486779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.486787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.486965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.487145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.487155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.487163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.489994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.684 [2024-07-15 13:02:02.499356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.499738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.499756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.499763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.499947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.500127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.500137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.500143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.502968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.684 [2024-07-15 13:02:02.512482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.512908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.512926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.512935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.684 [2024-07-15 13:02:02.513114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.684 [2024-07-15 13:02:02.513299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.684 [2024-07-15 13:02:02.513310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.684 [2024-07-15 13:02:02.513318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.684 [2024-07-15 13:02:02.516143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.684 [2024-07-15 13:02:02.525663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.684 [2024-07-15 13:02:02.525968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.684 [2024-07-15 13:02:02.525986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.684 [2024-07-15 13:02:02.525993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.685 [2024-07-15 13:02:02.526171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.685 [2024-07-15 13:02:02.526354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.685 [2024-07-15 13:02:02.526364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.685 [2024-07-15 13:02:02.526370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.685 [2024-07-15 13:02:02.529192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.685 [2024-07-15 13:02:02.538711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.685 [2024-07-15 13:02:02.539089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.685 [2024-07-15 13:02:02.539106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.685 [2024-07-15 13:02:02.539113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.685 [2024-07-15 13:02:02.539296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.685 [2024-07-15 13:02:02.539477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.685 [2024-07-15 13:02:02.539487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.685 [2024-07-15 13:02:02.539497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.685 [2024-07-15 13:02:02.542328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.685 [2024-07-15 13:02:02.551843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.685 [2024-07-15 13:02:02.552216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.685 [2024-07-15 13:02:02.552238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420 00:27:31.685 [2024-07-15 13:02:02.552248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set 00:27:31.685 [2024-07-15 13:02:02.552427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor 00:27:31.685 [2024-07-15 13:02:02.552606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.685 [2024-07-15 13:02:02.552615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.685 [2024-07-15 13:02:02.552622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.685 [2024-07-15 13:02:02.555449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.685 [2024-07-15 13:02:02.564978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.685 [2024-07-15 13:02:02.565368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.685 [2024-07-15 13:02:02.565388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.685 [2024-07-15 13:02:02.565396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.685 [2024-07-15 13:02:02.565572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.685 [2024-07-15 13:02:02.565751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.685 [2024-07-15 13:02:02.565761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.685 [2024-07-15 13:02:02.565768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.685 [2024-07-15 13:02:02.568603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.685 [2024-07-15 13:02:02.578135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.685 [2024-07-15 13:02:02.578449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.685 [2024-07-15 13:02:02.578468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.685 [2024-07-15 13:02:02.578476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.685 [2024-07-15 13:02:02.578653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.685 [2024-07-15 13:02:02.578832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.685 [2024-07-15 13:02:02.578842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.685 [2024-07-15 13:02:02.578853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.685 [2024-07-15 13:02:02.581683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
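The "(( i == 0 ))" / "return 0" pair at the top of this block is the tail of the harness's wait-until-ready loop around the freshly started nvmf target app. A minimal sketch of that pattern (a hypothetical helper, not SPDK's exact autotest_common.sh code; rpc_get_methods is a real SPDK RPC, used here only as a liveness probe):

    wait_for_rpc() {
        local pid=$1 i
        for ((i = 60; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                    # app died while starting
            ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                                      # retry budget exhausted
        return 0
    }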
00:27:31.685 [2024-07-15 13:02:02.591197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.685 [2024-07-15 13:02:02.591564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.685 [2024-07-15 13:02:02.591584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.685 [2024-07-15 13:02:02.591594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:31.685 [2024-07-15 13:02:02.591771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.685 [2024-07-15 13:02:02.591951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.685 [2024-07-15 13:02:02.591962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.685 [2024-07-15 13:02:02.591970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.685 [2024-07-15 13:02:02.594801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.685 [2024-07-15 13:02:02.598218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.685 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.685 [2024-07-15 13:02:02.604296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.685 [2024-07-15 13:02:02.604660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.685 [2024-07-15 13:02:02.604677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.685 [2024-07-15 13:02:02.604685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.685 [2024-07-15 13:02:02.604861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.685 [2024-07-15 13:02:02.605041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.685 [2024-07-15 13:02:02.605050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.685 [2024-07-15 13:02:02.605058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.685 [2024-07-15 13:02:02.607892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.685 [2024-07-15 13:02:02.617415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.685 [2024-07-15 13:02:02.617839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.685 [2024-07-15 13:02:02.617856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.685 [2024-07-15 13:02:02.617864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.685 [2024-07-15 13:02:02.618045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.685 [2024-07-15 13:02:02.618223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.685 [2024-07-15 13:02:02.618238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.685 [2024-07-15 13:02:02.618244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.685 [2024-07-15 13:02:02.621067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.685 [2024-07-15 13:02:02.630616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.685 [2024-07-15 13:02:02.631067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.685 [2024-07-15 13:02:02.631084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.685 [2024-07-15 13:02:02.631092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.685 [2024-07-15 13:02:02.631275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.685 [2024-07-15 13:02:02.631453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.685 [2024-07-15 13:02:02.631463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.685 [2024-07-15 13:02:02.631469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.685 [2024-07-15 13:02:02.634307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
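The two rpc_cmd traces above (bdevperf.sh@17 and @18) configure the target while the initiator is still cycling through failed resets. Outside the harness the same steps would look roughly like this (a sketch assuming an SPDK source tree and the default RPC socket; rpc_cmd is essentially a wrapper around scripts/rpc.py):

    # Create the TCP transport; -u 8192 matches the in-capsule data size used above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Create the 64 MiB, 512-byte-block malloc bdev that will back the namespace.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0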
00:27:31.945 Malloc0
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.945 [2024-07-15 13:02:02.643673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.945 [2024-07-15 13:02:02.644043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.945 [2024-07-15 13:02:02.644060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.945 [2024-07-15 13:02:02.644068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.945 [2024-07-15 13:02:02.644252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.945 [2024-07-15 13:02:02.644431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.945 [2024-07-15 13:02:02.644441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.945 [2024-07-15 13:02:02.644448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.945 [2024-07-15 13:02:02.647279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.945 [2024-07-15 13:02:02.656797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.945 [2024-07-15 13:02:02.657178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.945 [2024-07-15 13:02:02.657200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1270980 with addr=10.0.0.2, port=4420
00:27:31.945 [2024-07-15 13:02:02.657207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270980 is same with the state(5) to be set
00:27:31.945 [2024-07-15 13:02:02.657391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270980 (9): Bad file descriptor
00:27:31.945 [2024-07-15 13:02:02.657569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:31.945 [2024-07-15 13:02:02.657578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:31.945 [2024-07-15 13:02:02.657585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:31.945 [2024-07-15 13:02:02.660424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:31.945 [2024-07-15 13:02:02.662219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:31.945 13:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1869881
00:27:31.945 [2024-07-15 13:02:02.669944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:31.945 [2024-07-15 13:02:02.704816] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:41.921
00:27:41.921 Latency(us)
00:27:41.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:41.921 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:41.921 Verification LBA range: start 0x0 length 0x4000
00:27:41.921 Nvme1n1 : 15.01 8091.28 31.61 12670.79 0.00 6145.27 452.34 14816.83
00:27:41.921 ===================================================================================================================
00:27:41.921 Total : 8091.28 31.61 12670.79 0.00 6145.27 452.34 14816.83
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:41.921 rmmod nvme_tcp
00:27:41.921 rmmod nvme_fabrics
00:27:41.921 rmmod nvme_keyring
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1870863 ']'
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1870863
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1870863 ']'
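The bdevperf.sh@19-21 RPCs traced above complete the target bring-up, and the effect is visible in the interleaved app log: the "Target Listening" notice lands at 13:02:02.662219 and the very next reset attempt succeeds at .704816. The equivalent standalone sequence would be roughly (a sketch, again via scripts/rpc.py on the default socket):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose the bdev as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420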
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1870863
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1870863
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1870863'
00:27:41.921 killing process with pid 1870863
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1870863
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1870863
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:41.921 13:02:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:42.858 13:02:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:42.858
00:27:42.858 real    0m26.511s
00:27:42.858 user    1m3.210s
00:27:42.858 sys     0m6.433s
00:27:42.858 13:02:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:42.858 13:02:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:42.858 ************************************
00:27:42.858 END TEST nvmf_bdevperf
00:27:42.858 ************************************
00:27:42.858 13:02:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:42.858 13:02:13 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:42.858 13:02:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:42.858 13:02:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:42.858 13:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:42.858 ************************************
00:27:42.858 START TEST nvmf_target_disconnect
00:27:42.858 ************************************
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:42.858 * Looking for test storage...
00:27:42.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:42.858 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:43.117 13:02:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:27:43.118 13:02:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
13:02:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
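One detail worth pulling out of the common.sh sourcing traced above: the host identity is generated, not hard-coded. nvme gen-hostnqn (nvme-cli) emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the harness reuses the UUID portion as the host ID. A sketch of that derivation (assuming common.sh does what the trace shows):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':', leaving the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")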
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:27:48.393 Found 0000:86:00.0 (0x8086 - 0x159b)
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:27:48.393 Found 0000:86:00.1 (0x8086 - 0x159b)
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:27:48.393 Found net devices under 0000:86:00.0: cvl_0_0
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:27:48.393 Found net devices under 0000:86:00.1: cvl_0_1
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:48.393 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev
cvl_0_0 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:48.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:27:48.653 00:27:48.653 --- 10.0.0.2 ping statistics --- 00:27:48.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.653 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:48.653 00:27:48.653 --- 10.0.0.1 ping statistics --- 00:27:48.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.653 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:48.653 ************************************ 00:27:48.653 START TEST nvmf_target_disconnect_tc1 00:27:48.653 ************************************ 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:48.653 
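(For orientation: the nvmf_tcp_init trace above splits the two E810 ports into a point-to-point test bed. Condensed to plain commands, and assuming the two port netdevs are already named cvl_0_0 and cvl_0_1 as the device scan above reports, the wiring amounts to the sketch below; 4420 is the NVMe/TCP port used throughout this run.)

    # Target port goes into its own namespace with 10.0.0.2; the initiator
    # port stays in the root namespace with 10.0.0.1; 4420/tcp is allowed in.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns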
13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:48.653 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:48.913 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.913 [2024-07-15 13:02:19.668110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.913 [2024-07-15 13:02:19.668146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8be60 with addr=10.0.0.2, port=4420 00:27:48.913 [2024-07-15 13:02:19.668166] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:48.913 [2024-07-15 13:02:19.668174] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:48.913 [2024-07-15 13:02:19.668180] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:48.913 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:48.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:48.913 Initializing NVMe Controllers 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.913 00:27:48.913 real 0m0.113s 00:27:48.913 user 0m0.051s 00:27:48.913 sys 
0m0.061s 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.913 ************************************ 00:27:48.913 END TEST nvmf_target_disconnect_tc1 00:27:48.913 ************************************ 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:48.913 ************************************ 00:27:48.913 START TEST nvmf_target_disconnect_tc2 00:27:48.913 ************************************ 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1875811 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1875811 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1875811 ']' 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
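(What waitforlisten is doing here: nvmfappstart launched nvmf_tgt inside the target namespace, via ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 with pid 1875811, and the helper now polls the UNIX-domain RPC socket until the app answers. A minimal sketch of an equivalent wait; using rpc.py with rpc_get_methods as the liveness probe is an assumption for illustration, not what autotest_common.sh literally runs.)

    # Poll /var/tmp/spdk.sock until nvmf_tgt responds, bailing out if it dies.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.1
    done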
00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.913 13:02:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.913 [2024-07-15 13:02:19.796406] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:48.913 [2024-07-15 13:02:19.796444] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.913 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.913 [2024-07-15 13:02:19.865298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.172 [2024-07-15 13:02:19.945408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.172 [2024-07-15 13:02:19.945448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.172 [2024-07-15 13:02:19.945456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.172 [2024-07-15 13:02:19.945462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.172 [2024-07-15 13:02:19.945467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.172 [2024-07-15 13:02:19.945574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:49.172 [2024-07-15 13:02:19.945717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:49.172 [2024-07-15 13:02:19.945826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.172 [2024-07-15 13:02:19.945827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.742 Malloc0 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:49.742 13:02:20 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.742 [2024-07-15 13:02:20.669325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.742 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.002 [2024-07-15 13:02:20.701583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1876059 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:50.002 13:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.002 EAL: No free 2048 kB 
hugepages reported on node 1
00:27:51.949 13:02:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1875811
00:27:51.949 13:02:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:51.949 Read completed with error (sct=0, sc=8)
00:27:51.949 starting I/O failed
00:27:51.949 Write completed with error (sct=0, sc=8)
00:27:51.949 starting I/O failed
00:27:51.949 [… the same "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for every outstanding I/O (the run uses -q 32), interleaved with one CQ transport error per qpair: …]
00:27:51.949 [2024-07-15 13:02:22.728906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:51.949 [2024-07-15 13:02:22.729109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:51.949 [2024-07-15 13:02:22.729307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:51.950 [2024-07-15 13:02:22.729500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:51.950 [2024-07-15 13:02:22.729750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:51.950 [2024-07-15 13:02:22.729767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:51.950 qpair failed and we were unable to recover it.
00:27:51.950 [… the same three-line "connect() failed, errno = 111" / "sock connection error of tqpair=… with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." block repeats continuously, first on tqpair 0x7fa948000b90 and then mostly on tqpair 0x110ced0 (app timestamps 13:02:22.730038 through 13:02:22.753714, Jenkins elapsed 00:27:51.950 through 00:27:51.952) …]
00:27:51.952 [2024-07-15 13:02:22.754025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:51.952 [2024-07-15 13:02:22.754058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:51.952 qpair failed and we were unable to recover it.
00:27:51.952 [2024-07-15 13:02:22.754281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.754314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.754524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.754558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.754767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.754798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.755089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.755120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.755287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.755319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.755535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.755566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.755713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.755743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.755964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.755996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.756218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.756258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.756411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.756443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 
00:27:51.952 [2024-07-15 13:02:22.756637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.756669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.756871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.756902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.757097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.757128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.757365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.757398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.757601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.757633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.757837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.757868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.758126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.758158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.758367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.758401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.758603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.758635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.952 qpair failed and we were unable to recover it. 00:27:51.952 [2024-07-15 13:02:22.758928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.952 [2024-07-15 13:02:22.758959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 
00:27:51.953 [2024-07-15 13:02:22.759233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.759267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.759421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.759453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.759643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.759674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.759827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.759859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.760119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.760151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.760360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.760392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.760603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.760635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.760767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.760798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.760996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.761027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.761237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.761270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 
00:27:51.953 [2024-07-15 13:02:22.761500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.761531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.761792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.761823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.762058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.762089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.762258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.762291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.762494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.762526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.762738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.762770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.763013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.763044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.763189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.763221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.763398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.763432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.763693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.763725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 
00:27:51.953 [2024-07-15 13:02:22.764033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.764066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.764280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.764314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.764577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.764610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.764757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.764788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.765017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.765048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.765310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.765343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.765544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.765575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.765850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.765882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.766110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.766141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.766336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.766370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 
00:27:51.953 [2024-07-15 13:02:22.766582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.953 [2024-07-15 13:02:22.766613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.953 qpair failed and we were unable to recover it. 00:27:51.953 [2024-07-15 13:02:22.766821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.766852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.767064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.767096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.767300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.767332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.767615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.767646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.767844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.767875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.768111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.768142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.768400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.768432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.768752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.768783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.768915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.768946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 
00:27:51.954 [2024-07-15 13:02:22.769207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.769257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.769489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.769521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.769732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.769763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.769980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.770012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.770300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.770333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.770528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.770560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.770775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.770806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.771093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.771124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.771410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.771442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.771664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.771700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 
00:27:51.954 [2024-07-15 13:02:22.771943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.771975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.772107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.772138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.772334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.772368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.772589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.772620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.772862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.772894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.773102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.773134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.773394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.773427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.773693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.773725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.774030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.774062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.774254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.774286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 
00:27:51.954 [2024-07-15 13:02:22.774448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.774479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.774682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.774714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.774969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.775000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.775149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.775180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.775465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.775498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.775767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.776104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.776136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.776329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.776368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.776629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.776660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.954 [2024-07-15 13:02:22.776979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.777031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 
00:27:51.954 [2024-07-15 13:02:22.777258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.954 [2024-07-15 13:02:22.777291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.954 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.777578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.777611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.777822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.777853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.778005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.778037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.778242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.778276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.778423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.778454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.778681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.778713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.778963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.778995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.779282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.779315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.779509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.779541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 
00:27:51.955 [2024-07-15 13:02:22.779743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.779775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.780110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.780141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.780352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.780385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.780681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.780712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.781021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.781053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.781330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.781363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.781638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.781670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.781933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.781964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.782223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.782267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.782422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.782455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 
00:27:51.955 [2024-07-15 13:02:22.782713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.782745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.782900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.782932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.783137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.783169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.783382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.783416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.783693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.783730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.783938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.783971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.784246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.784278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.784498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.784531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.784679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.784710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.784967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.784999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 
00:27:51.955 [2024-07-15 13:02:22.785207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.785249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.785450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.785483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.785679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.785711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.785943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.785975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.786179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.786211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.786418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.786451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.786579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.786611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.786904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.786935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.787202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.787243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 00:27:51.955 [2024-07-15 13:02:22.787407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.955 [2024-07-15 13:02:22.787440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:51.955 qpair failed and we were unable to recover it. 
00:27:51.955 [2024-07-15 13:02:22.787653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:51.955 [2024-07-15 13:02:22.787684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:51.955 qpair failed and we were unable to recover it.
00:27:51.955 [the same three-line sequence repeats for tqpair=0x110ced0 from 2024-07-15 13:02:22.787 through 13:02:22.808, every attempt failing with errno = 111]
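errno = 111 here is Linux ECONNREFUSED: the TCP connection attempt reached the target but was actively refused, which usually means nothing is listening on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port). A minimal sketch of the failing call, outside of SPDK, assuming a reachable host with no listener on that port:

    /* Minimal sketch (not SPDK code) of the call that posix_sock_create is
     * making. On Linux, errno 111 is ECONNREFUSED. The address and port
     * mirror the log above; the assumption is a reachable 10.0.0.2 with
     * nothing listening on TCP 4420. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);            /* IANA-assigned NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With a reachable host but no listener, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

If the host were unreachable at the IP layer, connect() would instead fail with EHOSTUNREACH or ETIMEDOUT, so the repeated errno = 111 suggests the target machine is up but its NVMe/TCP listener is not accepting connections at this point in the test.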
00:27:51.958 [2024-07-15 13:02:22.808571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:51.958 [2024-07-15 13:02:22.808647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:51.958 qpair failed and we were unable to recover it.
00:27:51.958 [the same three-line sequence repeats for tqpair=0x7fa948000b90 from 13:02:22.808 through 13:02:22.843, every attempt failing with errno = 111]
00:27:51.961 [2024-07-15 13:02:22.844053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.844086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.844259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.844293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.844468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.844501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.844729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.844762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.844997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.845030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.845185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.845219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.845461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.845495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.845826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.845858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.846126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.846160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.846477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.846511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 
00:27:51.961 [2024-07-15 13:02:22.846785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.846837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.846991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.847024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.847320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.847354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.847600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.847633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.847799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.847831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.848120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.961 [2024-07-15 13:02:22.848154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.961 qpair failed and we were unable to recover it. 00:27:51.961 [2024-07-15 13:02:22.848419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.848454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.848674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.848707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.848931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.848964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.849183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.849216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 
00:27:51.962 [2024-07-15 13:02:22.849519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.849553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.849776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.849810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.850110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.850142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.850377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.850411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.850681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.850715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.851030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.851063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.851315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.851349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.851525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.851558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.851727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.851759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.851971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.852010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 
00:27:51.962 [2024-07-15 13:02:22.852173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.852206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.852357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.852390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.852684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.852717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.852961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.852994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.853160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.853191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.853368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.853401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.853602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.853635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.853867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.853900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.854102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.854135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.854406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.854442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 
00:27:51.962 [2024-07-15 13:02:22.854713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.854746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.854998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.855032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.855322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.855356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.855566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.855598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.855823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.855856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.856137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.856170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.856475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.856509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.856729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.856762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.857014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.857046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.857276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.857309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 
00:27:51.962 [2024-07-15 13:02:22.857533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.962 [2024-07-15 13:02:22.857565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.962 qpair failed and we were unable to recover it. 00:27:51.962 [2024-07-15 13:02:22.857787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.857819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.858139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.858171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.858514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.858549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.858758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.858790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.859089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.859123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.859367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.859407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.859705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.859738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.859975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.860008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.860155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.860189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 
00:27:51.963 [2024-07-15 13:02:22.860336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.860369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.860524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.860557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.860763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.860795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.861052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.861086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.861288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.861321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.861526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.861559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.861838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.861872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.862182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.862215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.862479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.862513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.862686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.862719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 
00:27:51.963 [2024-07-15 13:02:22.862886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.862919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.863216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.863260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.863494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.863526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.863824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.863856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.863999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.864032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.864309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.864343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.864521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.864554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.864758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.864791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.865041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.865074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.865322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.865356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 
00:27:51.963 [2024-07-15 13:02:22.865580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.865612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.865830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.865863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.866098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.866132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.866373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.866407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.866636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.866668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.866886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.866919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.867193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.867234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.867454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.867487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.867647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.867679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 00:27:51.963 [2024-07-15 13:02:22.867976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.963 [2024-07-15 13:02:22.868009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.963 qpair failed and we were unable to recover it. 
00:27:51.963 [2024-07-15 13:02:22.868283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.868317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.868496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.868529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.868751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.868783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.869098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.869132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.869420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.869454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.869724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.869756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.869975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.870018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.870302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.870335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.870627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.870660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.870839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.870871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 
00:27:51.964 [2024-07-15 13:02:22.871151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.871184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.871424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.871458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.871687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.871719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.872004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.872037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.872199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.872241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.872470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.872503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.872706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.872739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.873024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.873057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.873262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.873297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.873527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.873560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 
00:27:51.964 [2024-07-15 13:02:22.873802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.873835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.874013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.874046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.874347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.874381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.874680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.874713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.875055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.875088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.875379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.875413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.875715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.875748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.875998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.876030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.876344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.876378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.876613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.876646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 
00:27:51.964 [2024-07-15 13:02:22.876867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.876901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.877069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.877101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.877257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.877290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.877502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.877535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.877761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.877794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.878023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.878056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.878356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.878390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.878634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.878667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.878840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.878873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 00:27:51.964 [2024-07-15 13:02:22.879078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.964 [2024-07-15 13:02:22.879111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:51.964 qpair failed and we were unable to recover it. 
00:27:51.964 [2024-07-15 13:02:22.879431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:51.964 [2024-07-15 13:02:22.879465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:51.965 qpair failed and we were unable to recover it.
00:27:52.251 (this three-line failure sequence repeats unchanged, apart from the microsecond timestamps, for every reconnect attempt against tqpair=0x7fa948000b90 at 10.0.0.2, port 4420, from 13:02:22.879431 through 13:02:22.935313; each attempt ends with errno = 111 and an unrecoverable qpair)
00:27:52.251 [2024-07-15 13:02:22.935616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.935648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.935954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.935987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.936219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.936261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.936538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.936570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.936776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.936808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.937052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.937085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.937311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.937345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.937505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.937538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.937710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.937742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.937969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.938004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 
00:27:52.251 [2024-07-15 13:02:22.938143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.938176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.938440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.938474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.938759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.938792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.939037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.939069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.939294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.939328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.939554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.939586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.939810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.939843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.940171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.940205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.940438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.251 [2024-07-15 13:02:22.940471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.251 qpair failed and we were unable to recover it. 00:27:52.251 [2024-07-15 13:02:22.940636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.940668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 
00:27:52.252 [2024-07-15 13:02:22.940890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.940922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.941164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.941198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.941503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.941536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.941825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.941858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.942081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.942114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.942335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.942369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.942601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.942634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.942934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.942967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.943172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.943205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.943448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.943482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 
00:27:52.252 [2024-07-15 13:02:22.943775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.943809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.944006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.944040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.944255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.944289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.944463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.944495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.944665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.944699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.945018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.945051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.945276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.945317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.945565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.945597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.945826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.945858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.946074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.946108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 
00:27:52.252 [2024-07-15 13:02:22.946342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.946376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.946594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.946627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.946952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.946985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.947204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.947260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.947504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.947536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.947701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.947735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.948017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.948051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.948260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.948294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.948518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.948551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.948703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.948735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 
00:27:52.252 [2024-07-15 13:02:22.949098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.949131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.949414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.949448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.949621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.949654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.949806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.949838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.950161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.950193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.950444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.950478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.950750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.950783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.951026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.951058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.951294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.951329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 00:27:52.252 [2024-07-15 13:02:22.951562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.252 [2024-07-15 13:02:22.951595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.252 qpair failed and we were unable to recover it. 
00:27:52.252 [2024-07-15 13:02:22.951802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.951835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.952154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.952188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.952454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.952488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.952808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.952842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.953063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.953095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.953303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.953337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.953617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.953650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.953948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.953982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.954288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.954322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.954467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.954500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 
00:27:52.253 [2024-07-15 13:02:22.954742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.954775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.955043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.955076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.955298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.955333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.955480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.955513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.955740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.955773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.956018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.956052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.956278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.956317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.956568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.956601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.956895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.956929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.957222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.957265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 
00:27:52.253 [2024-07-15 13:02:22.957503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.957537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.957703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.957736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.957999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.958033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.958192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.958234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.958458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.958491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.958636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.958669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.958874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.958906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.959268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.959303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.959579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.959611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.959943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.959976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 
00:27:52.253 [2024-07-15 13:02:22.960257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.253 [2024-07-15 13:02:22.960290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.253 qpair failed and we were unable to recover it. 00:27:52.253 [2024-07-15 13:02:22.960523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.960556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.960880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.960913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.961126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.961158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.961441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.961476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.961771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.961803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.961965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.961999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.962276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.962310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.962485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.962517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.962697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.962729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 
00:27:52.254 [2024-07-15 13:02:22.962891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.962923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.963197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.963249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.963461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.963495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.963730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.963763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.963975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.964008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.964164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.964196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.964516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.964550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.964838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.964870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.965089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.965121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.965423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.965458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 
00:27:52.254 [2024-07-15 13:02:22.965766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.965798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.966131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.966165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.966382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.966416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.966570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.966602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.966746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.966778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.967011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.967045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.967265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.967305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.967509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.967542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.967863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.967896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.968170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.968203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 
00:27:52.254 [2024-07-15 13:02:22.968430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.968463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.968679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.968711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.968988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.969021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.969301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.969336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.969634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.969667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.969952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.969985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.970190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.970231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.970450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.970483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.970710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.970743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 00:27:52.254 [2024-07-15 13:02:22.970946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.254 [2024-07-15 13:02:22.970981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.254 qpair failed and we were unable to recover it. 
00:27:52.254 [2024-07-15 13:02:22.971222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.254 [2024-07-15 13:02:22.971275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:52.254 qpair failed and we were unable to recover it.
[... the identical triplet — posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats continuously from 13:02:22.971 through 13:02:23.031, always for the same tqpair, address, and port; only the timestamps differ ...]
00:27:52.261 [2024-07-15 13:02:23.031347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.261 [2024-07-15 13:02:23.031381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:52.261 qpair failed and we were unable to recover it.
00:27:52.261 [2024-07-15 13:02:23.031592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.031627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.031763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.031796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.032114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.032148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.032319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.032359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.032637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.032670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.032904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.032937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.033201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.033243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.033460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.033493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.033645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.033678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.033929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.033961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 
00:27:52.261 [2024-07-15 13:02:23.034202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.034243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.034407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.034441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.034664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.034699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.034929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.034962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.035178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.035211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.035453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.035487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.035641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.035675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.035932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.035966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.036123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.036156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.036431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.036466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 
00:27:52.261 [2024-07-15 13:02:23.036631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.036664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.036835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.036869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.037085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.037118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.037288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.037322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.037567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.037599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.037804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.037836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.038132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.038166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.038391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.038425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.038581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.038614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.038935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.038968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 
00:27:52.261 [2024-07-15 13:02:23.039283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.039317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.039542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.039575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.039753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.039786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.040056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.040089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.040355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.040390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.040613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.040647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.261 [2024-07-15 13:02:23.040863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.261 [2024-07-15 13:02:23.040897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.261 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.041051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.041084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.041369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.041402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.041645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.041678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 
00:27:52.262 [2024-07-15 13:02:23.042019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.042052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.042283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.042316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.042523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.042556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.042822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.042863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.043167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.043200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.043382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.043416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.043642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.043676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.043900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.043932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.044084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.044117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.044336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.044370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 
00:27:52.262 [2024-07-15 13:02:23.044610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.044644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.044931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.044964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.045134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.045167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.046771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.046827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.047185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.047221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.047477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.047511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.047688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.047722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.047896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.047929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.048104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.048137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.048386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.048421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 
00:27:52.262 [2024-07-15 13:02:23.048630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.048662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.048838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.048874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.049030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.049063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.049335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.049369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.049526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.049558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.049882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.049917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.050119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.050152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.050380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.050414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.050599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.050633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.050855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.050888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 
00:27:52.262 [2024-07-15 13:02:23.051123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.051156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.051394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.051428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.051665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.051703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.053310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.053369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.053642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.053677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.053848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.053880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.054087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.054123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.262 [2024-07-15 13:02:23.054410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.262 [2024-07-15 13:02:23.054446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.262 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.054735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.054768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.054994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.055027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 
00:27:52.263 [2024-07-15 13:02:23.055215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.055293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.055551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.055589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.055873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.055908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.056082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.056126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.056483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.056519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.056819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.056853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.057086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.057125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.057444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.057480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.057816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.057850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.058251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.058288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 
00:27:52.263 [2024-07-15 13:02:23.058525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.058560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.058715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.058753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.059109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.059146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.059370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.059415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.059589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.059622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.059918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.059952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.060252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.060286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.060458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.060491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.060710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.060742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.060978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.061010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 
00:27:52.263 [2024-07-15 13:02:23.061287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.061320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.061487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.061521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.061796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.061835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.062074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.062108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.062390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.062425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.062650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.062683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.062900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.062933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.063171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.063204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.063549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.063588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.063748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.063782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 
00:27:52.263 [2024-07-15 13:02:23.064102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.064135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.064409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.064444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.064690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.064726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.065087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.065120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.065352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.065400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.065628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.065671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.065966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.065999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.066330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.066372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.263 qpair failed and we were unable to recover it. 00:27:52.263 [2024-07-15 13:02:23.066614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.263 [2024-07-15 13:02:23.066649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.066977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.067011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 
00:27:52.264 [2024-07-15 13:02:23.067312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.067350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.067571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.067606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.067783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.067817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.068023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.068062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.068291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.068327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.068527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.068561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.068738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.068772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.069068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.069101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.069341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.069376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 00:27:52.264 [2024-07-15 13:02:23.069586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.264 [2024-07-15 13:02:23.069619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.264 qpair failed and we were unable to recover it. 
00:27:52.264 [2024-07-15 13:02:23.069902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.264 [2024-07-15 13:02:23.069935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:52.264 qpair failed and we were unable to recover it.
[... the same three-line failure -- connect() failed, errno = 111; sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats back-to-back from 13:02:23.070093 through 13:02:23.106159 ...]
00:27:52.267 [2024-07-15 13:02:23.106420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.267 [2024-07-15 13:02:23.106455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:52.267 qpair failed and we were unable to recover it.
00:27:52.267 [2024-07-15 13:02:23.106737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.267 [2024-07-15 13:02:23.106816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.267 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back, now for tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420, from 13:02:23.107084 through 13:02:23.124665 ...]
00:27:52.269 [2024-07-15 13:02:23.124954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.269 [2024-07-15 13:02:23.124987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.269 qpair failed and we were unable to recover it.
00:27:52.269 [2024-07-15 13:02:23.125299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.125333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.125590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.125623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.125799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.125832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.126046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.126079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.126272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.126307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.126584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.126617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.126791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.126824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.127131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.127164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.269 [2024-07-15 13:02:23.127384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.269 [2024-07-15 13:02:23.127418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.269 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.127568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.127601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 
00:27:52.270 [2024-07-15 13:02:23.127890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.127923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.128173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.128206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.128465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.128497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.128778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.128810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.129063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.129096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.129336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.129370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.129601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.129635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.129799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.129832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.130007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.130041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.130182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.130215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 
00:27:52.270 [2024-07-15 13:02:23.130439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.130472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.130713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.130747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.130970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.131003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.131285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.131325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.131577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.131610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.131909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.131942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.132163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.132196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.132356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.132390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.132560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.132594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.132817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.132850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 
00:27:52.270 [2024-07-15 13:02:23.133102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.133135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.133380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.133415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.133639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.133672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.133963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.133995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.134276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.134311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.134481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.134513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.134689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.134722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.135020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.135054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.135208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.135251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.135406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.135438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 
00:27:52.270 [2024-07-15 13:02:23.135612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.135645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.135970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.136003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.136272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.136306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.136471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.136504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.136657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.136688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.137051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.137085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.137314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.137348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.137569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.137602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.270 [2024-07-15 13:02:23.137765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.270 [2024-07-15 13:02:23.137798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.270 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.138021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.138054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 
00:27:52.271 [2024-07-15 13:02:23.138263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.138297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.138511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.138544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.138748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.138782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.139010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.139043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.139284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.139318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.139487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.139519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.139744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.139777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.140131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.140164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.140323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.140357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.140584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.140617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 
00:27:52.271 [2024-07-15 13:02:23.140888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.140921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.141208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.141252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.141478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.141511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.141677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.141711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.142024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.142057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.142272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.142307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.142480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.142512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.142742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.142775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.142923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.142956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.143121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.143154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 
00:27:52.271 [2024-07-15 13:02:23.143440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.143475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.143642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.143676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.143884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.143915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.144202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.144243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.144539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.144572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.144789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.144823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.144962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.144996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.145353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.145388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.145595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.145628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.145845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.145877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 
00:27:52.271 [2024-07-15 13:02:23.146154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.146188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.146503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.146536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.146713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.146746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.147061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.147093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.147267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.147301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.147516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.147549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.147767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.147799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.148068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.148101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.271 [2024-07-15 13:02:23.148336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.271 [2024-07-15 13:02:23.148370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.271 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.148596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.148628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 
00:27:52.272 [2024-07-15 13:02:23.148797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.148835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.149052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.149084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.149370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.149404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.149682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.149714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.149865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.149897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.150114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.150146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.150322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.150355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.150648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.150680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.150958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.150990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.151222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.151264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 
00:27:52.272 [2024-07-15 13:02:23.151474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.151507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.151729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.151761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.152063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.152096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.152398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.152432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.152607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.152640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.152846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.152879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.153175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.153207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.153457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.153490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.153783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.153816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.154067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.154100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 
00:27:52.272 [2024-07-15 13:02:23.154400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.154433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.154643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.154676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.154990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.155022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.155255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.155290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.155452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.155486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.155708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.155741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.156042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.156075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.156298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.156332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.156558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.156590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.156863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.156896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 
00:27:52.272 [2024-07-15 13:02:23.157142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.157174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.157489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.157523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.157693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.157725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.158032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.158064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.158384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.158417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.158692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.158725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.159026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.159059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.159379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.159412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.159618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.159651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.272 qpair failed and we were unable to recover it. 00:27:52.272 [2024-07-15 13:02:23.159965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.272 [2024-07-15 13:02:23.159999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.273 qpair failed and we were unable to recover it. 
00:27:52.273 [2024-07-15 13:02:23.160151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.273 [2024-07-15 13:02:23.160189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.273 qpair failed and we were unable to recover it.
00:27:52.273 [... identical posix_sock_create/nvme_tcp_qpair_connect_sock error triple repeated for every connect retry from 13:02:23.160 through 13:02:23.220; each attempt to tqpair=0x7fa940000b90 (addr=10.0.0.2, port=4420) fails with errno = 111 and the qpair is reported as unrecoverable ...]
00:27:52.553 [2024-07-15 13:02:23.220371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.553 [2024-07-15 13:02:23.220406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.553 qpair failed and we were unable to recover it.
00:27:52.553 [2024-07-15 13:02:23.220613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.220646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.220892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.220925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.221141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.221174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.221482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.221516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.221816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.221850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.222157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.222190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.222427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.222462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.222622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.222655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.222925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.222958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.223156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.223189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 
00:27:52.553 [2024-07-15 13:02:23.223496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.223529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.223831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.223863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.224150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.553 [2024-07-15 13:02:23.224181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.553 qpair failed and we were unable to recover it. 00:27:52.553 [2024-07-15 13:02:23.224488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.224522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.224810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.224843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.225048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.225080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.225329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.225363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.225643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.225676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.225949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.225980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.226301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.226335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 
00:27:52.554 [2024-07-15 13:02:23.226537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.226569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.226846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.226879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.227169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.227202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.227505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.227538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.227826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.227858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.228065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.228097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.228314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.228348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.228554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.228588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.228882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.228915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.229211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.229255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 
00:27:52.554 [2024-07-15 13:02:23.229469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.229508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.229832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.229864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.230110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.230142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.230387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.230421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.230586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.230619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.230844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.230876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.231104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.231137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.231364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.231399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.231670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.231702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.231974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.232007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 
00:27:52.554 [2024-07-15 13:02:23.232306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.232339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.232581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.232613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.232746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.232779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.233056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.233088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.233403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.233437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.233660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.233692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.554 [2024-07-15 13:02:23.233988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.554 [2024-07-15 13:02:23.234021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.554 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.234314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.234349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.234520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.234553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.234842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.234873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 
00:27:52.555 [2024-07-15 13:02:23.235040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.235073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.235348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.235382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.235601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.235633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.235936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.235968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.236170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.236203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.236381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.236414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.236548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.236579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.236856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.236890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.237209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.237252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.237406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.237439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 
00:27:52.555 [2024-07-15 13:02:23.237657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.237689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.237985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.238017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.238251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.238286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.238426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.238457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.238626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.238659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.238815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.238847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.239144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.239177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.239332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.239366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.239603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.239635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.239930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.239964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 
00:27:52.555 [2024-07-15 13:02:23.240191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.240239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.240451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.240484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.240756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.240789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.241111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.241144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.241407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.241441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.241764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.241796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.242085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.242118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.242350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.242383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.242615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.242647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.242943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.242976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 
00:27:52.555 [2024-07-15 13:02:23.243202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.243247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.243524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.243556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.243857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.243890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.244182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.244214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.244391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.244424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.244727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.244759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.245022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.555 [2024-07-15 13:02:23.245054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.555 qpair failed and we were unable to recover it. 00:27:52.555 [2024-07-15 13:02:23.245328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.245362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.245584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.245617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.245890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.245922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 
00:27:52.556 [2024-07-15 13:02:23.246145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.246178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.246513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.246547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.246853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.246885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.247167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.247199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.247433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.247466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.247671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.247703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.248023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.248055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.248350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.248384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.248613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.248647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.248903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.248935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 
00:27:52.556 [2024-07-15 13:02:23.249330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.249363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.249613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.249646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.249899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.249931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.250189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.250222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.250443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.250477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.250783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.250815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.251138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.251171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.251340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.251374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.251577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.251609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.251835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.251868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 
00:27:52.556 [2024-07-15 13:02:23.252072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.252110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.252354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.252387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.252614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.252646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.252847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.252880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.253186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.253217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.253454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.253486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.253718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.253751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.254097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.254129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.254289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.254323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.254549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.254580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 
00:27:52.556 [2024-07-15 13:02:23.254795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.254828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.255103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.255135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.255486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.255519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.255819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.255851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.256088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.256121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.256326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.256360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.256590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.256622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.556 [2024-07-15 13:02:23.256796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.556 [2024-07-15 13:02:23.256829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.556 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.257077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.257110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.257335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.257368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 
00:27:52.557 [2024-07-15 13:02:23.257589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.257622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.257773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.257805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.258005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.258038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.258274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.258309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.258611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.258644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.258889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.258922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.259124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.259156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.259343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.259378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.259658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.259692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.259984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.260017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 
00:27:52.557 [2024-07-15 13:02:23.260309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.260343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.260644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.260676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.260823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.260856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.261060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.261093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.261323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.261357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.261665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.261698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.261969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.262001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.262322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.262356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.262557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.262589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.262800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.262833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 
00:27:52.557 [2024-07-15 13:02:23.263107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.263145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.263379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.263412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.263692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.263725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.264029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.264062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.264371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.264405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.264570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.264602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.264876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.264908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.265180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.265212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.265471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.265504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.265827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.265860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 
00:27:52.557 [2024-07-15 13:02:23.266013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.266046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.266309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.266343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.266587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.266620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.266833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.266865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.267083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.267117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.267417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.267451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.267759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.267792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.268029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.557 [2024-07-15 13:02:23.268062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.557 qpair failed and we were unable to recover it. 00:27:52.557 [2024-07-15 13:02:23.268320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.268354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.268498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.268530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 
00:27:52.558 [2024-07-15 13:02:23.268760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.268792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.269012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.269045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.269341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.269374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.269690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.269722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.269958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.269992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.270216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.270267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.270566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.270598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.270827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.270860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.271143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.271176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.271436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.271469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 
00:27:52.558 [2024-07-15 13:02:23.271646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.271678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.271978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.272011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.272335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.272369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.272672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.272704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.272938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.272970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.273182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.273214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.273520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.273553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.273790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.273822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.274099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.274133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.274439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.274472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 
00:27:52.558 [2024-07-15 13:02:23.274775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.274813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.275099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.275132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.275340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.275373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.275600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.275632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.275908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.275940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.276093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.276125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.276426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.276459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.276681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.276713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.277017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.277050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.277350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.277383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 
00:27:52.558 [2024-07-15 13:02:23.277523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.277555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.277767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.277799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.278037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.278069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.278369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.558 [2024-07-15 13:02:23.278403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.558 qpair failed and we were unable to recover it. 00:27:52.558 [2024-07-15 13:02:23.278699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.278732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.279027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.279059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.279354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.279388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.279634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.279666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.279962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.279994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.280192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.280236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 
00:27:52.559 [2024-07-15 13:02:23.280478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.280510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.280740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.280773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.280977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.281009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.281212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.281258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.281582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.281614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.281832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.281865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.282188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.282220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.282515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.282548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.282751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.282783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.283027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.283058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 
00:27:52.559 [2024-07-15 13:02:23.283333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.283368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.283641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.283674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.283960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.283993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.284215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.284258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.284530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.284562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.284816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.284849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.285075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.285109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.285394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.285427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.285661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.285694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.286007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.286039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 
00:27:52.559 [2024-07-15 13:02:23.286346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.286381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.286664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.286697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.286883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.286916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.287206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.287250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.287497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.287531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.287691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.287724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.287939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.287972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.288251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.288286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.288453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.288485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.288811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.288844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 
00:27:52.559 [2024-07-15 13:02:23.289156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.289189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.289485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.289518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.289721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.289754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.289959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.559 [2024-07-15 13:02:23.289993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.559 qpair failed and we were unable to recover it. 00:27:52.559 [2024-07-15 13:02:23.290236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.290271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.290405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.290437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.290597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.290629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.290867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.290900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.291141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.291174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.291384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.291417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 
00:27:52.560 [2024-07-15 13:02:23.291555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.291587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.291925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.291960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.292248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.292281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.292494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.292526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.292820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.292852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.293150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.293183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.293341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.293376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.293667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.293704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.293976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.294009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.294306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.294341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 
00:27:52.560 [2024-07-15 13:02:23.294513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.294545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.294827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.294860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.295083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.295115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.295326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.295359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.295588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.295621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.295823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.295855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.296069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.296102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.296395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.296429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.296660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.296692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.296985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.297018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 
00:27:52.560 [2024-07-15 13:02:23.297261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.297294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.297595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.297628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.297802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.297836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.298175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.298206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.298527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.298560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.298779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.298811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.299106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.299139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.299429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.299464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.299764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.299796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.300083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.300115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 
00:27:52.560 [2024-07-15 13:02:23.300342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.300376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.300545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.300577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.300820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.300852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.301088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.301121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.301288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.560 [2024-07-15 13:02:23.301322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.560 qpair failed and we were unable to recover it. 00:27:52.560 [2024-07-15 13:02:23.301557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.301588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.301877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.301910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.302112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.302143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.302389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.302422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.302723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.302756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 
00:27:52.561 [2024-07-15 13:02:23.302963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.302995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.303196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.303241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.303490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.303522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.303736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.303770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.304052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.304085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.304366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.304415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.304630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.304662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.304936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.304974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.305134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.305167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.305382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.305416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 
00:27:52.561 [2024-07-15 13:02:23.305688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.305721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.306017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.306051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.306343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.306378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.306597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.306629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.306869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.306902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.307175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.307208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.307516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.307550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.307825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.307858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.308104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.308137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 00:27:52.561 [2024-07-15 13:02:23.308429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.561 [2024-07-15 13:02:23.308463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.561 qpair failed and we were unable to recover it. 
00:27:52.566 [2024-07-15 13:02:23.363182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.363216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.363535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.363568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.363820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.363853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.364124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.364156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.364472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.364507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.364782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.364814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.364961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.364993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.365235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.365269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.365496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.365530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.365824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.365856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 
00:27:52.566 [2024-07-15 13:02:23.366087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.366120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.366359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.366393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.366692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.366726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.367076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.367109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.367401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.367435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.367664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.367697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.367830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.367862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.368094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.368127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.368409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.368443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.368671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.368704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 
00:27:52.566 [2024-07-15 13:02:23.368997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.369030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.369329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.369362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.369655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.369688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.369982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.566 [2024-07-15 13:02:23.370015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.566 qpair failed and we were unable to recover it. 00:27:52.566 [2024-07-15 13:02:23.370306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.370340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.370566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.370599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.370827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.370860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.371149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.371182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.371480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.371515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.371759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.371791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 
00:27:52.567 [2024-07-15 13:02:23.372114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.372147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.372425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.372460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.372700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.372733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.372935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.372967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.373274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.373308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.373472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.373505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.373783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.373816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.374117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.374155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.374380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.374414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.374625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.374658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 
00:27:52.567 [2024-07-15 13:02:23.374879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.374911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.375240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.375274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.375501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.375534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.375816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.375849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.376057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.376090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.376386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.376422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.376643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.376676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.376980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.377013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.377298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.377331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.377558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.377592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 
00:27:52.567 [2024-07-15 13:02:23.377830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.377862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.378088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.378121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.378342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.378376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.378624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.378656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.378823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.378855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.379005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.379039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.379314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.379348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.379571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.379603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.379827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.379860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.567 qpair failed and we were unable to recover it. 00:27:52.567 [2024-07-15 13:02:23.380131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.567 [2024-07-15 13:02:23.380164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 
00:27:52.568 [2024-07-15 13:02:23.380484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.380517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.380730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.380762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.381057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.381089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.381380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.381414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.381713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.381746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.382034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.382067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.382235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.382269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.382561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.382594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.382871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.382904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.383202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.383243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 
00:27:52.568 [2024-07-15 13:02:23.383477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.383510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.383721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.383754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.383967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.384000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.384272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.384306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.384528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.384560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.384833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.384866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.385088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.385121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.385444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.385493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.385793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.385826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.386105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.386137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 
00:27:52.568 [2024-07-15 13:02:23.386408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.386443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.386759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.386791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.387066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.387100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.387316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.387350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.387556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.387588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.387802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.387834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.388134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.388167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.388343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.388377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.388614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.388647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.388851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.388883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 
00:27:52.568 [2024-07-15 13:02:23.389180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.389212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.389380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.389414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.389617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.389650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.389875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.389908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.390131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.390164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.390388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.390423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.390620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.390653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.390951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.390984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.391212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.391257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 00:27:52.568 [2024-07-15 13:02:23.391474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.568 [2024-07-15 13:02:23.391507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.568 qpair failed and we were unable to recover it. 
00:27:52.568 [2024-07-15 13:02:23.391743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.391776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.392071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.392103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.392305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.392340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.392556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.392589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.392866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.392900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.393139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.393172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.393511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.393545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.393696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.393729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.393935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.393967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.394278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.394313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 
00:27:52.569 [2024-07-15 13:02:23.394530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.394563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.394765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.394798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.395117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.395149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.395446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.395480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.395691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.395724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.395930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.395962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.396281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.396316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.396590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.396629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.396925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.396957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.397269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.397304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 
00:27:52.569 [2024-07-15 13:02:23.397541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.397574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.397871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.397904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.398132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.398166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.398478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.398512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.398787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.398820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.398978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.399011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.399331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.399365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.399617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.399650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.399976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.400010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.400245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.400280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 
00:27:52.569 [2024-07-15 13:02:23.400435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.400468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.400771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.400804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.401026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.401059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.401302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.401336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.401631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.401663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.401978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.402011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.402286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.402320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.402540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.402573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.402817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.402851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 00:27:52.569 [2024-07-15 13:02:23.403064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.569 [2024-07-15 13:02:23.403097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.569 qpair failed and we were unable to recover it. 
00:27:52.570 [2024-07-15 13:02:23.403330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.403365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.403660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.403692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.403987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.404020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.404166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.404199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.404508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.404542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.404835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.404868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.405158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.405191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.405492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.405527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.405812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.405845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.406131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.406163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 
00:27:52.570 [2024-07-15 13:02:23.406467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.406501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.406708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.406741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.407058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.407092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.407317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.407352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.407666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.407700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.407994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.408027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.408322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.408368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.408683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.408721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.409025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.409058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.409371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.409405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 
00:27:52.570 [2024-07-15 13:02:23.409679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.409711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.410002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.410035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.410331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.410366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.410582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.410615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.410848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.410880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.411152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.411185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.411424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.411459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.411752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.411785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.412105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.412138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.412270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.412304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 
00:27:52.570 [2024-07-15 13:02:23.412605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.412638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.412868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.412901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.413186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.413219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.413457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.413489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.413639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.413672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.414372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.414422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.414766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.414799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.415098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.415131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.415451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.415484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 00:27:52.570 [2024-07-15 13:02:23.415790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.570 [2024-07-15 13:02:23.415824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.570 qpair failed and we were unable to recover it. 
00:27:52.570 [2024-07-15 13:02:23.416042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.416075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.416387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.416422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.416712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.416745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.417042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.417074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.417291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.417325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.417572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.417605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.417926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.417959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.418202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.418255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.418465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.418498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.418781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.418813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 
00:27:52.571 [2024-07-15 13:02:23.419013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.419045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.419367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.419402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.419695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.419728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.419946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.419978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.420278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.420312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.420631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.420663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.420875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.420908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.421205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.421252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.421556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.421590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.421871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.421904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 
00:27:52.571 [2024-07-15 13:02:23.422127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.422160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.422456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.422490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.422718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.422750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.423069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.423102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.423319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.423353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.423588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.423620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.423919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.423951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.424247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.424281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.424511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.424543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.424816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.424849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 
00:27:52.571 [2024-07-15 13:02:23.425162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.425194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.425378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.425412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.425712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.425744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.426050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.426081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.426324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.426358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.571 qpair failed and we were unable to recover it. 00:27:52.571 [2024-07-15 13:02:23.426514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.571 [2024-07-15 13:02:23.426547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.426840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.426872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.427168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.427202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.427477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.427510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.427810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.427843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 
00:27:52.572 [2024-07-15 13:02:23.428050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.428082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.428382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.428416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.428687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.428719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.428933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.428965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.429193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.429236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.429514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.429546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.429819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.429852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.430075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.430107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.430409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.430443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.430727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.430760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 
00:27:52.572 [2024-07-15 13:02:23.430973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.431005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.431330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.431365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.431641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.431674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.431983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.432016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.432223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.432267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.432537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.432569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.432840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.432873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.433195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.433237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.433476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.433509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.433731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.433763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 
00:27:52.572 [2024-07-15 13:02:23.434061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.434093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.434394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.434428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.434668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.434701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.434939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.434971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.435275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.435309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.435596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.435628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.435847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.435879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.436195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.436238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.436540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.436573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.436850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.436883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 
00:27:52.572 [2024-07-15 13:02:23.437092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.437124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.437434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.437468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.437697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.437728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.438011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.438043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.438261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.438296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.438571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.572 [2024-07-15 13:02:23.438604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.572 qpair failed and we were unable to recover it. 00:27:52.572 [2024-07-15 13:02:23.438821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.438854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.439016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.439049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.439346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.439379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.439668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.439701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 
00:27:52.573 [2024-07-15 13:02:23.439923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.439955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.440235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.440267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.440541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.440573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.440787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.440819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.441019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.441057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.441329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.441363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.441698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.441730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.441886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.441918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.442215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.442267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.442557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.442588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 
00:27:52.573 [2024-07-15 13:02:23.442869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.442901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.443120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.443152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.443375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.443408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.443708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.443741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.444025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.444057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.444380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.444413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.444687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.444719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.444920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.444953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.445302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.445336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.445553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.445586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 
00:27:52.573 [2024-07-15 13:02:23.445863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.445895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.446110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.446142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.446425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.446459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.446758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.446790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.446960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.446993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.447279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.447313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.447614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.447646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.447937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.447969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.448193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.448236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.448538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.448571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 
00:27:52.573 [2024-07-15 13:02:23.448789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.448822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.449053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.449086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.449290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.449323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.449525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.449557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.449854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.449887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.450255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.450530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.573 [2024-07-15 13:02:23.450563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.573 qpair failed and we were unable to recover it. 00:27:52.573 [2024-07-15 13:02:23.450856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.450889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.451186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.451219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.451535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.451568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 
00:27:52.574 [2024-07-15 13:02:23.451798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.451830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.452128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.452161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.452383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.452417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.452702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.452735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.453006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.453043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.453377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.453411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.453704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.453737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.453952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.453984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.454260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.454293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.454604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.454637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 
00:27:52.574 [2024-07-15 13:02:23.454927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.454960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.455168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.455201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.455486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.455518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.455839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.455871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.456164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.456197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.456414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.456447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.456765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.456797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.457088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.457121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.457421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.457455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 00:27:52.574 [2024-07-15 13:02:23.457741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.574 [2024-07-15 13:02:23.457774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.574 qpair failed and we were unable to recover it. 
00:27:52.574-00:27:52.855 [2024-07-15 13:02:23.458093 .. 13:02:23.513267] the same three-message sequence repeats uninterrupted: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:27:52.855 [2024-07-15 13:02:23.513546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.513579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.513822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.513854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.514126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.514159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.514452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.514487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.514809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.514842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.515162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.515195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.515506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.515539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.515875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.515908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.516132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.516164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.516457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.516490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 
00:27:52.855 [2024-07-15 13:02:23.516765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.516798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.517109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.517141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.517451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.517484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.517745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.517778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.517925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.517955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.518271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.518305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.518622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.518654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.518952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.518985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.855 qpair failed and we were unable to recover it. 00:27:52.855 [2024-07-15 13:02:23.519238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.855 [2024-07-15 13:02:23.519271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.519578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.519610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 
00:27:52.856 [2024-07-15 13:02:23.519854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.519886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.520137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.520170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.520504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.520537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.520835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.520867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.521112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.521144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.521372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.521406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.521687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.521719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.521992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.522023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.522248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.522282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.522556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.522588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 
00:27:52.856 [2024-07-15 13:02:23.522862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.522894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.523178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.523210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.523446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.523479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.523682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.523714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.523867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.523905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.524200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.524245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.524525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.524557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.524779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.524811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.525065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.525097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.525315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.525349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 
00:27:52.856 [2024-07-15 13:02:23.525644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.525675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.525988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.526021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.526262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.526295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.526448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.526481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.526757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.526789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.527096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.527128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.527366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.527400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.527744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.527776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.527985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.528018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.528246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.528280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 
00:27:52.856 [2024-07-15 13:02:23.528493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.528526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.528740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.528772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.529080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.529113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.529346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.529380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.529634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.529666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.529955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.529988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.530260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.530293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.530621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.530653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.530947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.856 [2024-07-15 13:02:23.530979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.856 qpair failed and we were unable to recover it. 00:27:52.856 [2024-07-15 13:02:23.531275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.531308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 
00:27:52.857 [2024-07-15 13:02:23.531525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.531557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.531765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.531798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.532119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.532152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.532435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.532469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.532672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.532703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.532919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.532952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.533119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.533151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.533321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.533355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.533637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.533669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.533900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.533934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 
00:27:52.857 [2024-07-15 13:02:23.534215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.534258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.534508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.534541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.534840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.534873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.535189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.535222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.535539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.535578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.535807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.535841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.536135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.536168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.536421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.536455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.536761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.536794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.537104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.537138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 
00:27:52.857 [2024-07-15 13:02:23.537417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.537451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.537702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.537736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.538030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.538063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.538381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.538414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.538631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.538664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.538951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.538985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.539200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.539243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.539402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.539435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.539667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.539699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.539997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.540030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 
00:27:52.857 [2024-07-15 13:02:23.540200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.540244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.857 qpair failed and we were unable to recover it. 00:27:52.857 [2024-07-15 13:02:23.540449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.857 [2024-07-15 13:02:23.540481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.540773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.540805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.541022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.541055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.541351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.541386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.541660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.541694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.541841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.541871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.542163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.542195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.542477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.542510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.542802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.542834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 
00:27:52.858 [2024-07-15 13:02:23.543058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.543089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.543402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.543436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.543650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.543682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.543831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.543863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.544107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.544139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.544433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.544467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.544764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.544796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.545042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.545074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.545297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.545331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.545542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.545574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 
00:27:52.858 [2024-07-15 13:02:23.545768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.545801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.546092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.546125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.546420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.546453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.546691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.546723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.546943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.546981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.547273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.547306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.547522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.547554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.547755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.547787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.547989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.548021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.548260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.548294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 
00:27:52.858 [2024-07-15 13:02:23.548552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.548585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.548776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.548806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.549080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.549112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.549375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.549408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.549574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.549604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.549873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.549905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.550148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.550181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.550489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.550521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.550689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.550720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.550965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.550998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 
00:27:52.858 [2024-07-15 13:02:23.551141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.551171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.551481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.858 [2024-07-15 13:02:23.551515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.858 qpair failed and we were unable to recover it. 00:27:52.858 [2024-07-15 13:02:23.551839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.551871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.552082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.552114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.552386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.552419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.552721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.552753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.552916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.552947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.553210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.553251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.553460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.553492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 00:27:52.859 [2024-07-15 13:02:23.553741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.553772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it. 
00:27:52.859 [2024-07-15 13:02:23.554011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.859 [2024-07-15 13:02:23.554043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.859 qpair failed and we were unable to recover it.
00:27:52.859 [the same triplet -- posix.c:1038:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats 209 more times, timestamps 13:02:23.554259 through 13:02:23.610888]
00:27:52.864 [2024-07-15 13:02:23.611208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.611264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.611420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.611453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.611663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.611695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.611916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.611948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.612222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.612267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.612439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.612471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.612740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.612772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.612989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.613021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.613246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.613279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.613414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.613446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 
00:27:52.864 [2024-07-15 13:02:23.613656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.613688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.613915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.613947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.614108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.614139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.614354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.614387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.614593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.614624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.614946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.614978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.615202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.615243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.615454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.615487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.615641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.615672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.615906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.615937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 
00:27:52.864 [2024-07-15 13:02:23.616076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.864 [2024-07-15 13:02:23.616106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.864 qpair failed and we were unable to recover it. 00:27:52.864 [2024-07-15 13:02:23.616388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.616435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.616588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.616620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.616772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.616804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.616969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.617001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.617322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.617355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.617580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.617611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.617832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.617863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.618088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.618120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.618397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.618430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 
00:27:52.865 [2024-07-15 13:02:23.618651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.618682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.618800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.618837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.619065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.619096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.619318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.619351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.619507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.619540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.619745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.619779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.620001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.620031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.620245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.620278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.620426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.620458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.620735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.620767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 
00:27:52.865 [2024-07-15 13:02:23.620938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.620969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.621191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.621223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.621460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.621492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.621642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.621673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.621897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.621929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.622145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.622177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.622318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.622350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.622491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.622522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.622767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.622799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.623016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.623047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 
00:27:52.865 [2024-07-15 13:02:23.623188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.623220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.623463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.623495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.623735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.623767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.624059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.624090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.624222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.624268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.624469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.624501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.624652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.624683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.624894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.624925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.625146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.625177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.625337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.625369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 
00:27:52.865 [2024-07-15 13:02:23.625504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.625537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.865 [2024-07-15 13:02:23.625693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.865 [2024-07-15 13:02:23.625724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.865 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.626022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.626054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.626261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.626295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.626493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.626524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.626728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.626759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.626981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.627012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.627175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.627209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.627455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.627487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.627614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.627646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 
00:27:52.866 [2024-07-15 13:02:23.627915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.627947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.628166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.628203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.628446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.628478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.628689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.628720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.628938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.628970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.629175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.629206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.629503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.629536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.629805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.629838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.630040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.630072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.630297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.630330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 
00:27:52.866 [2024-07-15 13:02:23.630502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.630534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.630681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.630713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.630981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.631014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.631219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.631260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.631485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.631517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.631723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.631755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.631903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.631934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.632067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.632099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.632325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.632359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.632574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.632605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 
00:27:52.866 [2024-07-15 13:02:23.632825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.632856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.633152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.633185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.633393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.633425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.633634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.633666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.633932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.633964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.634181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.634212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.634453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.866 [2024-07-15 13:02:23.634484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.866 qpair failed and we were unable to recover it. 00:27:52.866 [2024-07-15 13:02:23.634678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.634710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.634927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.634959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.635119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.635152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 
00:27:52.867 [2024-07-15 13:02:23.635318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.635352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.635547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.635578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.635798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.635829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.636097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.636129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.636285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.636318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.636583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.636614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.636910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.636942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.637103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.637133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.637335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.637367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.637584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.637616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 
00:27:52.867 [2024-07-15 13:02:23.637814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.637846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.638006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.638044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.638210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.638251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.638472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.638504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.638656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.638687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.638830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.638861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.639129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.639161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.639293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.639325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.639537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.639569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.639888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.639919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 
00:27:52.867 [2024-07-15 13:02:23.640133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.640165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.640354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.640388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.640628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.640660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.640803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.640835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.641052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.641084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.641288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.641322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.641452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.641482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.641632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.641663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.641956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.641987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 00:27:52.867 [2024-07-15 13:02:23.642303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.867 [2024-07-15 13:02:23.642336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.867 qpair failed and we were unable to recover it. 
00:27:52.867 [2024-07-15 13:02:23.642535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.867 [2024-07-15 13:02:23.642567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.867 qpair failed and we were unable to recover it.
00:27:52.867 (the three messages above repeat verbatim for every reconnect attempt from [2024-07-15 13:02:23.642833] through [2024-07-15 13:02:23.692757], differing only in timestamps; the elapsed-time prefix advances from 00:27:52.867 to 00:27:52.873)
00:27:52.873 [2024-07-15 13:02:23.692947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.692978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.693243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.693275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.693476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.693507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.693652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.693681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.693895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.693926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.694051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.694081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.694222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.694273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.694395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.694425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.694631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.694660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.694862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.694892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 
00:27:52.873 [2024-07-15 13:02:23.695049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.695080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.695201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.695248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.695392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.695422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.695680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.695710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.695903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.695933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.696069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.696098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.696390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.696422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.696626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.696656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.696858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.696888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.697112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.697143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 
00:27:52.873 [2024-07-15 13:02:23.697345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.697377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.697637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.697667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.697931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.697962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.698110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.698140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.698356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.698387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.698530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.698561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.698762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.698792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.698984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.699013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.699202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.699255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.699531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.699562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 
00:27:52.873 [2024-07-15 13:02:23.699766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.699796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.699990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.700020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.700258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.700291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.700526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.700556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.700761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.700792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.700946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.700977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.873 [2024-07-15 13:02:23.701187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.873 [2024-07-15 13:02:23.701217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.873 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.701439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.701470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.701741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.701813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.702045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.702078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-07-15 13:02:23.702317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.702351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.702595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.702627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.702836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.702868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.703020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.703051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.703265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.703297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.703506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.703537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.703804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.703836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.704116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.704147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.704340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.704372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.704654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.704685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-07-15 13:02:23.704881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.704913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.705110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.705150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.705341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.705374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.705635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.705666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.705850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.705882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.706079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.706110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.706336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.706368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.706605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.706638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.706791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.706822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.707104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.707136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-07-15 13:02:23.707352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.707385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.707663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.707695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.707837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.707869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.708021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.708051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.708309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.708342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.708547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.708578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.708783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.708814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.709043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.709075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.709298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.874 [2024-07-15 13:02:23.709534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.709565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 
00:27:52.874 [2024-07-15 13:02:23.709823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.874 [2024-07-15 13:02:23.709855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.874 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.710127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.710158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.710329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.710361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.710576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.710607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.710801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.710833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.711039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.711070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.711284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.711316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.711628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.711660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.711823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.711860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.712064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.712095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 
00:27:52.875 [2024-07-15 13:02:23.712254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.712286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.712405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.712436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.712630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.712662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.712895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.712927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.713122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.713153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.713353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.713386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.713585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.713617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.713763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.713794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.713988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.714019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.714287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.714319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 
00:27:52.875 [2024-07-15 13:02:23.714547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.714578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.714780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.714812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.715016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.715048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.715285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.715318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.715540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.715571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.715724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.715755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.715965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.715997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.716131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.716161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.716301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.716334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.716547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.716579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 
00:27:52.875 [2024-07-15 13:02:23.716708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.716739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.716996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.717027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.717167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.717198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.717534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.717603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.717818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.717853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.718012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.718044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.718276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.718310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.718452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.718485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.718686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.718718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.718973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.719004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 
00:27:52.875 [2024-07-15 13:02:23.719140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.719170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.875 [2024-07-15 13:02:23.719306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.875 [2024-07-15 13:02:23.719338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.875 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.719534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.719565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.719751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.719783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.719957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.719988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.720139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.720171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.720319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.720366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.720519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.720551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.720777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.720815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.721096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.721128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 
00:27:52.876 [2024-07-15 13:02:23.721321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.721354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.721637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.721679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.721937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.721969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.722174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.722205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.722362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.722395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.722541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.722590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.722799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.722830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.723028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.723060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.723204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.723249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.723451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.723485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 
00:27:52.876 [2024-07-15 13:02:23.723633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.723664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.723811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.723843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.724077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.724109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.724327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.724360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.724501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.724533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.724734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.724765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.724971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.725002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.725262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.725295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.725469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.725500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.725707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.725738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 
00:27:52.876 [2024-07-15 13:02:23.725943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.725975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.726184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.726215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.726360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.726391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.726608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.726639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.726796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.726827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.726967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.726998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.727243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.727274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.727467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.727498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.727705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.727736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.727956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.727988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 
00:27:52.876 [2024-07-15 13:02:23.728195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.728235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.728382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.728414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.728623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.876 [2024-07-15 13:02:23.728654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.876 qpair failed and we were unable to recover it. 00:27:52.876 [2024-07-15 13:02:23.728912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.728943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.729081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.729113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.729367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.729397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.729596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.729626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.729783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.729814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.729956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.729986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.730192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.730232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 
00:27:52.877 [2024-07-15 13:02:23.730383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.730414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.730554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.730585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.730768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.730799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.731008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.731039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.731257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.731289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.731453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.731484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.731690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.731720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.732000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.732032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.732176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.732208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.732450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.732481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 
00:27:52.877 [2024-07-15 13:02:23.732669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.732701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.732978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.733009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.733269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.733302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.733466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.733497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.733779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.733811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.734071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.734102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.734246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.734278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.734470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.734501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.734658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.734689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.734920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.734952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 
00:27:52.877 [2024-07-15 13:02:23.735161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.735192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.735399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.735431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.735581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.735612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.735869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.735899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.736108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.736140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.736340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.736372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.736596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.736633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.736779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.736811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.737020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.737050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.737277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.737310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 
00:27:52.877 [2024-07-15 13:02:23.737492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.737523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.737695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.737727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.737987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.738018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.738178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.877 [2024-07-15 13:02:23.738210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.877 qpair failed and we were unable to recover it. 00:27:52.877 [2024-07-15 13:02:23.738435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.738466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.738673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.738704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.738907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.738939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.739071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.739102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.739255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.739288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.739435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.739467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 
00:27:52.878 [2024-07-15 13:02:23.739617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.739648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.739874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.739905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.740115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.740146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.740418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.740449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.740708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.740740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.740943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.740974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.741177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.741208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.741370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.741401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.741602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.741633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.741836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.741868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 
00:27:52.878 [2024-07-15 13:02:23.742125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.742156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.742266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.742299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.742585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.742618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.742751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.742789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.742995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.743026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.743217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.743257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.743461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.743493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.743769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.743799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.743923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.743954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.744164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.744195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 
00:27:52.878 [2024-07-15 13:02:23.744484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.744515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.744726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.744757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.744995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.745026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.745174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.745205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.745442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.745474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.745748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.745779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.745914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.745946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.746163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.746194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.746349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.746381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.746581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.746612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 
00:27:52.878 [2024-07-15 13:02:23.746814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.746845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.747045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.747077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.747252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.747285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.747438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.747469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.878 qpair failed and we were unable to recover it. 00:27:52.878 [2024-07-15 13:02:23.747644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.878 [2024-07-15 13:02:23.747674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.747877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.747908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.748135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.748167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.748289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.748321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.748512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.748543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.748735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.748766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 
00:27:52.879 [2024-07-15 13:02:23.748959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.748990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.749130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.749161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.749439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.749472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.749668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.749699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.749871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.749901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.750109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.750139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.750361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.750393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.750591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.750622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.750818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.750849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.751047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.751078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 
00:27:52.879 [2024-07-15 13:02:23.751219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.751259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.751469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.751501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.751709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.751739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.751843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.751874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.752033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.752064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.752199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.752236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.752500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.752531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.752730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.752761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.752893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.752925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.753208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.753251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 
00:27:52.879 [2024-07-15 13:02:23.753440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.753471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.753624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.753655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.753889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.753920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.754126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.754157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.754416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.754448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.754592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.754624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.754796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.754827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.755033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.755063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.755291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.879 [2024-07-15 13:02:23.755324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.879 qpair failed and we were unable to recover it. 00:27:52.879 [2024-07-15 13:02:23.755518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.755549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 
00:27:52.880 [2024-07-15 13:02:23.755672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.755703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.755961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.755992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.756183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.756214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.756425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.756456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.756663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.756693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.756917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.756948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.757100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.757131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.757273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.757304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.757519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.757551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.757783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.757814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 
00:27:52.880 [2024-07-15 13:02:23.758047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.758078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.758207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.758253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.758443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.758474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.758679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.758710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.758973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.759004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.759194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.759254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.759469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.759501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.759702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.759733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.759872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.759902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.760184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.760215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 
00:27:52.880 [2024-07-15 13:02:23.760384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.760417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.760698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.760729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.760871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.760902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.761106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.761137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.761422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.761454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.761624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.761655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.761872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.761904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.762136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.762166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.762306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.762339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.762483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.762514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 
00:27:52.880 [2024-07-15 13:02:23.762726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.762757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.762898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.762929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.763111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.763141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.763346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.763378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.763496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.763527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.763653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.763683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.763802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.763833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.764043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.764074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.764350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.764387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 00:27:52.880 [2024-07-15 13:02:23.764541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.764572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.880 qpair failed and we were unable to recover it. 
00:27:52.880 [2024-07-15 13:02:23.764774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.880 [2024-07-15 13:02:23.764805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.764960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.764991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.765144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.765175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.765353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.765385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.765639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.765670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.765927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.765958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.766162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.766193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.766421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.766453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.766590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.766622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 00:27:52.881 [2024-07-15 13:02:23.766776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.881 [2024-07-15 13:02:23.766807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:52.881 qpair failed and we were unable to recover it. 
00:27:52.881 [2024-07-15 13:02:23.766957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.766988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.767119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.767150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.767376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.767409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.767697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.767729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.767855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.767885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.768054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.768085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.768278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.768310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.768515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.768546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.768663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.768695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.768894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.768925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.769079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.769110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.769325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.769356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.769511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.769543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.769767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.769798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.770004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.770035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.770314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.770356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.770551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.770583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.770849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.770880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.771080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.771111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.771341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.771374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.771700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.771731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.771927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.771959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.772095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.772126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.772345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.772378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.772535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.772566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.772840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.772871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.773004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.773036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.773319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.773352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.773505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.773536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
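Every failure in the run above follows the same three-line pattern: the POSIX socket layer's connect() returns errno 111, which on Linux is ECONNREFUSED, so nvme_tcp_qpair_connect_sock cannot bring up the qpair to 10.0.0.2:4420 (the conventional NVMe/TCP port) and the qpair is abandoned. The following minimal POSIX sketch is purely illustrative and not taken from the SPDK tree; it reproduces the same errno when nothing is listening at the target address (the address and port are copied from the log, everything else is assumed):

/* connect_repro.c - hypothetical repro, not SPDK code.
 * Attempts a plain TCP connect() to the address the test log targets
 * and reports errno, mirroring the posix.c:1038 lines above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built with cc connect_repro.c and run while no listener is bound on 10.0.0.2:4420, this prints errno = 111, matching the posix_sock_create errors in the log.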
00:27:52.881 [2024-07-15 13:02:23.773721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111b000 is same with the state(5) to be set
00:27:52.881 [2024-07-15 13:02:23.774146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.774216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.774437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.881 [2024-07-15 13:02:23.774472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.881 qpair failed and we were unable to recover it.
00:27:52.881 [2024-07-15 13:02:23.774760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.774793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.774994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.775025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.775243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.775275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.775487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.775520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.775721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.775753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.775877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.775909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.776099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.776129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.776360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.776393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.776596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.776627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.776780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.776809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.776966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.776999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.777296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.777329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.777547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.777578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.777858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.777890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.778081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.778112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.778256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.778288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.778523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.778554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.778804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.778834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.778976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.779008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.779151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.779182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.779396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.779428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.779622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.779655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.779904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.779935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.780136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.780171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.780418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.780456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.780658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.780689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.780972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.781003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.781195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.781235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.781484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.781515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.781702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.781733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.781934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.781965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.782096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.782127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.782334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.782366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.782661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.782692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.782896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.782928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.783214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.783255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.783461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.783492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.783701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.783733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.783866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.783897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.784159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.784190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.784351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.882 [2024-07-15 13:02:23.784383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.882 qpair failed and we were unable to recover it.
00:27:52.882 [2024-07-15 13:02:23.784594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.784625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.784837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.784868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.785083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.785114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.785402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.785434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.785554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.785590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.785736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.785768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.785968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.785998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.786199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.786238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.786444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.786474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.786669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.786699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.786848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.786881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.787028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.787060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.787346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.787380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.787524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.787554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.787788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.787819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.787977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.788009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.788169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.788200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.788536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.788567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.788776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.788808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.788932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.788963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.789172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.789204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.789361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.789392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:52.883 [2024-07-15 13:02:23.789588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.883 [2024-07-15 13:02:23.789618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:52.883 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.789931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.789971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.790127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.790159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.790296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.790328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.790527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.790558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.790679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.790710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.790853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.790883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.791079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.791112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.791311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.791353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.791589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.791620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.791812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.791847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.791974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.792005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.792156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.792186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.792356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.792388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.792610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.792644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.792861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.792895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.793156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.793188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.793403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.793438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.793729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.793762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.793964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.793995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.794154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.794190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.794326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.794364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.794580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.794612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.794763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.794793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.794909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.794943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.795210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.795264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.795479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.795510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.795675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.161 [2024-07-15 13:02:23.795707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.161 qpair failed and we were unable to recover it.
00:27:53.161 [2024-07-15 13:02:23.795947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.795980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.796264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.796299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.796481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.796511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.796740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.796772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.797053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.797088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.797309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.797341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.797500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.797531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.797688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.797720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.797924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.797954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.798107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.798136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.798394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.798428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.798623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.798654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.798786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.798817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.798956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.798992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.799212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.799254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.799410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.799441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.799586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.799618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.799755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.799785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.799974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.800005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.800267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.800301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.800559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.800592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.800807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.800838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.801049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.801081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.801264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.801296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.801451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.801482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.801680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.801712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.801983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.802014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.802166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.802198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.802450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.802518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.802839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.802876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.803110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.803142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.803278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.803312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.803514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.803546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.803660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.803691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.803835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.803866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.804052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.804083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.804211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.804255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.804401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.804432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.804560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.804592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.804870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.804901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.162 qpair failed and we were unable to recover it.
00:27:53.162 [2024-07-15 13:02:23.805116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.162 [2024-07-15 13:02:23.805148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.163 qpair failed and we were unable to recover it.
00:27:53.163 [2024-07-15 13:02:23.805405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.163 [2024-07-15 13:02:23.805437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.163 qpair failed and we were unable to recover it.
00:27:53.163 [2024-07-15 13:02:23.805641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.163 [2024-07-15 13:02:23.805672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.163 qpair failed and we were unable to recover it.
00:27:53.163 [2024-07-15 13:02:23.805911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.163 [2024-07-15 13:02:23.805942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.163 qpair failed and we were unable to recover it.
00:27:53.163 [2024-07-15 13:02:23.806168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.163 [2024-07-15 13:02:23.806200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.163 qpair failed and we were unable to recover it.
00:27:53.163 [2024-07-15 13:02:23.806367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.163 [2024-07-15 13:02:23.806398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.163 qpair failed and we were unable to recover it.
00:27:53.163 [2024-07-15 13:02:23.806623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.806654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.806809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.806840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.807059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.807091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.807349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.807382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.807574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.807605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.807885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.807917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.808186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.808217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.808360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.808398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.808615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.808647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.808853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.808884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 
00:27:53.163 [2024-07-15 13:02:23.809086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.809118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.809380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.809414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.809545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.809575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.809780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.809812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.810003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.810033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.810291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.810323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.810538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.810569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.810827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.810858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.811055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.811086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.811244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.811276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 
00:27:53.163 [2024-07-15 13:02:23.811405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.811437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.811659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.811690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.811844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.811874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.812198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.812239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.812469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.812500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.812676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.812708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.812909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.812940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.813096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.813128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.813317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.813350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.813632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.813664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 
00:27:53.163 [2024-07-15 13:02:23.813917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.813949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.814172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.814203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.814339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.814371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.814526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.814556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.814873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.163 [2024-07-15 13:02:23.814904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.163 qpair failed and we were unable to recover it. 00:27:53.163 [2024-07-15 13:02:23.815042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.815072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.815283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.815315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.815567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.815598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.815747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.815778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.815976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.816008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 
00:27:53.164 [2024-07-15 13:02:23.816292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.816325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.816560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.816592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.816796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.816827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.816987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.817018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.817213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.817258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.817397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.817427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.817630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.817661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.817916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.817952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.818160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.818191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.818340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.818373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 
00:27:53.164 [2024-07-15 13:02:23.818604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.818636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.818835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.818866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.819022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.819054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.819264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.819297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.819553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.819585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.819786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.819817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.819951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.819981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.820122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.820152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.820371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.820404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.820597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.820628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 
00:27:53.164 [2024-07-15 13:02:23.820818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.820850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.821065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.821097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.821242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.821274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.821473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.821504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.821630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.821661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.821884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.821915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.822055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.822086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.822302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.822335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.822535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.822567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.822770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.822801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 
00:27:53.164 [2024-07-15 13:02:23.822924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.822954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.823239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.823271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.823478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.823508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.823768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.823799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.823967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.823997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.164 qpair failed and we were unable to recover it. 00:27:53.164 [2024-07-15 13:02:23.824121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.164 [2024-07-15 13:02:23.824151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.824333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.824380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.824598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.824629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.824753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.824783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.825060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.825091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 13:02:23.825345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.825377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.825608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.825639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.825780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.825810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.826010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.826040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.826254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.826285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.826482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.826514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.826737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.826768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.826876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.826914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.827046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.827077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.827305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.827336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 13:02:23.827481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.827512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.827713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.827744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.827945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.827977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.828104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.828134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.828350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.828382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.828611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.828641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.828849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.828880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.829159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.829190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.829482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.829514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.829797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.829829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 
00:27:53.165 [2024-07-15 13:02:23.829989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.830020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.830243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.830276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.830465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.830496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.830670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.830701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.830900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.830930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.831122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.831153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.831379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.831413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.831560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.165 [2024-07-15 13:02:23.831591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.165 qpair failed and we were unable to recover it. 00:27:53.165 [2024-07-15 13:02:23.831785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.831816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.832035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.832067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 
00:27:53.166 [2024-07-15 13:02:23.832206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.832247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.832532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.832565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.832703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.832733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.832925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.832956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.833218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.833260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.833542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.833573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.833780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.833811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.833960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.833991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.834179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.834210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.834334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.834366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 
00:27:53.166 [2024-07-15 13:02:23.834558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.834589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.834846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.834877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.835036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.835068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.835219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.835260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.835397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.835428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.835650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.835680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.835841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.835872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.836069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.836110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.836403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.836437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.836654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.836685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 
00:27:53.166 [2024-07-15 13:02:23.836886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.836917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.837132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.837163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.837328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.837362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.837586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.837617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.837844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.837876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.838095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.838126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.838337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.838369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.838563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.838594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.838785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.838816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.839072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.839103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 
00:27:53.166 [2024-07-15 13:02:23.839312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.839343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.839480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.839510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.839707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.839740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.839931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.839963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.840164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.840196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.840335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.840368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.840503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.840534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.840788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.840819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.166 qpair failed and we were unable to recover it. 00:27:53.166 [2024-07-15 13:02:23.841027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.166 [2024-07-15 13:02:23.841058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.167 qpair failed and we were unable to recover it. 00:27:53.167 [2024-07-15 13:02:23.841249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.167 [2024-07-15 13:02:23.841281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.167 qpair failed and we were unable to recover it. 
00:27:53.167 [2024-07-15 13:02:23.841472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.167 [2024-07-15 13:02:23.841503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.167 qpair failed and we were unable to recover it.
00:27:53.167 [... the identical connect() failed (errno = 111) / qpair-recovery-failure sequence above repeated 84 times in total for tqpair=0x7fa948000b90, 13:02:23.841472 through 13:02:23.861151 ...]
00:27:53.169 [2024-07-15 13:02:23.861476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.169 [2024-07-15 13:02:23.861546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.169 qpair failed and we were unable to recover it.
00:27:53.169 [... the identical sequence repeated 40 times in total for tqpair=0x7fa940000b90, 13:02:23.861476 through 13:02:23.870648 ...]
00:27:53.170 [2024-07-15 13:02:23.870978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.170 [2024-07-15 13:02:23.871048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.170 qpair failed and we were unable to recover it.
00:27:53.172 [... the identical sequence repeated 86 times in total for tqpair=0x7fa950000b90, 13:02:23.870978 through 13:02:23.891806 ...]
00:27:53.172 [2024-07-15 13:02:23.891951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.891982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.892266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.892299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.892521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.892552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.892755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.892787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.893040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.893072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.893277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.893310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.893586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.893617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.893819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.893850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.893992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.894024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.894282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.894313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 
00:27:53.172 [2024-07-15 13:02:23.894510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.894540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.894680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.894710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.894934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.894966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.895158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.895189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.895392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.895424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.895657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.895689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.895889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.895920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.896080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.896111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.896306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.896337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.896533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.896563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 
00:27:53.172 [2024-07-15 13:02:23.896753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.896784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.896985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.897016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.897219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.897260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.897519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.897550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.897705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.897736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.172 qpair failed and we were unable to recover it. 00:27:53.172 [2024-07-15 13:02:23.898016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.172 [2024-07-15 13:02:23.898048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.898204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.898245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.898517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.898548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.898751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.898782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.898998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.899029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 
00:27:53.173 [2024-07-15 13:02:23.899172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.899203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.899432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.899464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.899742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.899773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.900052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.900082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.900217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.900265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.900490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.900521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.900797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.900827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.901112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.901144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.901343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.901375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.901505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.901536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 
00:27:53.173 [2024-07-15 13:02:23.901816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.901846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.902014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.902045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.902251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.902284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.902495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.902526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.902657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.902689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.902832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.902863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.903057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.903088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.903303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.903335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.903490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.903534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.903819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.903850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 
00:27:53.173 [2024-07-15 13:02:23.903977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.904010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.904222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.904280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.904430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.904462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.904652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.904683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.904822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.904853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.904983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.905014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.905275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.905308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.905447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.905478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.905734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.905765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.905958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.905990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 
00:27:53.173 [2024-07-15 13:02:23.906261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.906293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.906498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.906529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.906725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.906756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.907014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.907045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.907187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.907218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.173 [2024-07-15 13:02:23.907352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.173 [2024-07-15 13:02:23.907383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.173 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.907581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.907613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.907810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.907841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.907990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.908020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.908302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.908335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 
00:27:53.174 [2024-07-15 13:02:23.908553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.908584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.908796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.908827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.908980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.909011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.909147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.909178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.909383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.909421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.909624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.909655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.909784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.909816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.910006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.910038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.910288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.910320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.910535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.910566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 
00:27:53.174 [2024-07-15 13:02:23.910693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.910725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.910924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.910955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.911146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.911177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.911415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.911447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.911641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.911673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.911880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.911911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.912103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.912134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.912392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.912423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.912573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.912605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.912813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.912845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 
00:27:53.174 [2024-07-15 13:02:23.913084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.913115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.913324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.913356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.913571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.913602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.913752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.913783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.913985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.914017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.914189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.914220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.914427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.914459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.914598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.914629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.914828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.914859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 00:27:53.174 [2024-07-15 13:02:23.915061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.174 [2024-07-15 13:02:23.915092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.174 qpair failed and we were unable to recover it. 
00:27:53.174 [2024-07-15 13:02:23.915281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.915313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.915529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.915561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.915706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.915737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.915996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.916027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.916234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.916271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.916545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.916577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.916793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.916825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.917084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.917115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.917261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.917293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.917590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.917621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 
00:27:53.175 [2024-07-15 13:02:23.917822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.917852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.918041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.918073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.918284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.918316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.918519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.918551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.918812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.918849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.919132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.919163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.919356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.919389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.919593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.919625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.919774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.919805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.920055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.920087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 
00:27:53.175 [2024-07-15 13:02:23.920300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.920332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.920562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.920593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.920722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.920753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.920908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.920939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.921063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.921095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.921251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.921282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.921490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.921521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.921810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.921842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.922052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.922083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 00:27:53.175 [2024-07-15 13:02:23.922286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.175 [2024-07-15 13:02:23.922317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.175 qpair failed and we were unable to recover it. 
00:27:53.175 [2024-07-15 13:02:23.922444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.175 [2024-07-15 13:02:23.922474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.175 qpair failed and we were unable to recover it.
00:27:53.181 [... the same three-line failure sequence repeats back-to-back, with only the timestamps advancing, from 13:02:23.922444 through 13:02:23.972278 ...]
00:27:53.181 [2024-07-15 13:02:23.972247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.181 [2024-07-15 13:02:23.972278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.181 qpair failed and we were unable to recover it.
00:27:53.181 [2024-07-15 13:02:23.972596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.972628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.972978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.973049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.973219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.973272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.973516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.973550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.973811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.973843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.974046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.974078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.974264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.974297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.974557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.974588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.974796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.974828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.975013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.975045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 
00:27:53.181 [2024-07-15 13:02:23.975254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.975286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.975495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.975526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.975744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.975775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.975916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.975948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.976150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.976181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.976416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.976450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.976663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.976695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.976839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.976871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.977100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.977133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.977323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.977356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 
00:27:53.181 [2024-07-15 13:02:23.977567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.977597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.977780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.977812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.977972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.978006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.978212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.978256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.978450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.978482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.978745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.978776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.978918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.978948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.979157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.979188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.979475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.979515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.979705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.979736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 
00:27:53.181 [2024-07-15 13:02:23.979886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.979917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.980199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.980241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.980453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.980484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.980605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.980637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.980848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.980880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.981110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.981142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.981459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.981492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.981634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.981666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.981865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.981897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.181 qpair failed and we were unable to recover it. 00:27:53.181 [2024-07-15 13:02:23.982118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.181 [2024-07-15 13:02:23.982150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 
00:27:53.182 [2024-07-15 13:02:23.982387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.982616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.982648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.982956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.982989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.983222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.983265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.983544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.983576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.983730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.983761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.983977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.984008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.984165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.984196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.984525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.984593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.984817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.984852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 
00:27:53.182 [2024-07-15 13:02:23.985013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.985052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.985196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.985243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.985527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.985559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.985691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.985722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.985914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.985945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.986145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.986183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.986407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.986439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.986602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.986634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.986840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.986872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.987159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.987191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 
00:27:53.182 [2024-07-15 13:02:23.987379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.987412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.987605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.987636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.987785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.987815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.987957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.987988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.988130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.988161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.988356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.988390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.988672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.988704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.988870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.988902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.989175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.989205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.989483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.989515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 
00:27:53.182 [2024-07-15 13:02:23.989657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.989689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.989846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.989878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.990151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.990182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.182 [2024-07-15 13:02:23.990389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.182 [2024-07-15 13:02:23.990420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.182 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.990680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.990712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.990932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.990964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.991090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.991121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.991333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.991366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.991575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.991606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.991894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.991930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 
00:27:53.183 [2024-07-15 13:02:23.992153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.992185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.992481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.992516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.992797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.992832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.993074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.993106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.993256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.993291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.993516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.993548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.993774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.993810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.994032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.994064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.994367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.994406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.994639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.994680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 
00:27:53.183 [2024-07-15 13:02:23.994965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.995008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.995141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.995172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.995443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.995476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.995679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.995714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.995926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.995957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.996100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.996144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.996275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.996308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.996428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.996458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.996641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.996674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.996818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.996851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 
00:27:53.183 [2024-07-15 13:02:23.997068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.997100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.997319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.997356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.997584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.997616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.997812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.997845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.998129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.998161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.998451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.998485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.998703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.998736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.999011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.999042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.999261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.999293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.999453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.999485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 
00:27:53.183 [2024-07-15 13:02:23.999685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.999716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:23.999948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:23.999979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:24.000122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:24.000153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:24.000306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:24.000338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.183 qpair failed and we were unable to recover it. 00:27:53.183 [2024-07-15 13:02:24.000529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.183 [2024-07-15 13:02:24.000565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.000763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.000795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.000940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.000972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.001253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.001287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.001517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.001550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.001746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.001776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 
00:27:53.184 [2024-07-15 13:02:24.001918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.001948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.002168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.002199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.002473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.002512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.002713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.002745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.002854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.002885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.003102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.003135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.003280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.003313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.003460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.003496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.003753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.003785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 00:27:53.184 [2024-07-15 13:02:24.003933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.184 [2024-07-15 13:02:24.003963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.184 qpair failed and we were unable to recover it. 
00:27:53.184 [2024-07-15 13:02:24.004171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.184 [2024-07-15 13:02:24.004201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.184 qpair failed and we were unable to recover it.
00:27:53.184 [2024-07-15 13:02:24.004419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.184 [2024-07-15 13:02:24.004451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.184 qpair failed and we were unable to recover it.
[... the same three-line triplet — connect() failed, errno = 111 / sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for some 200 further attempts, timestamps advancing from 13:02:24.004 to 13:02:24.054 (about 50 ms) ...]
00:27:53.189 [2024-07-15 13:02:24.054242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.189 [2024-07-15 13:02:24.054274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.189 qpair failed and we were unable to recover it.
00:27:53.189 [2024-07-15 13:02:24.054411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.054442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.054671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.054703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.054918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.054949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.055168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.055200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.055448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.055480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.055681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.055712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.055970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.056001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.056284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.056316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.056626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.056657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.056871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.056903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 
00:27:53.189 [2024-07-15 13:02:24.057124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.057156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.189 [2024-07-15 13:02:24.057283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.189 [2024-07-15 13:02:24.057315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.189 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.057539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.057570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.057711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.057742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.058030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.058062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.058343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.058375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.058532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.058564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.058823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.058854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.059048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.059080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 00:27:53.190 [2024-07-15 13:02:24.059339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.190 [2024-07-15 13:02:24.059371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.190 qpair failed and we were unable to recover it. 
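The errno in the triplets above is worth decoding: on Linux, errno 111 is ECONNREFUSED, i.e. the target host answered the TCP SYN with a RST because nothing was listening on 10.0.0.2:4420 (4420 being the conventional NVMe/TCP port). A minimal standalone sketch of the failing call (an assumed example, not SPDK's posix_sock_create() itself) would be:

```c
/* Minimal sketch (assumed example, not SPDK's posix_sock_create()):
 * a plain connect() to a reachable host with no listener on the port
 * fails with errno 111 (ECONNREFUSED) on Linux, which is exactly the
 * error the log reports for 10.0.0.2:4420. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* When the host is up but no target listens on the port, the
         * SYN is answered with RST and this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```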
00:27:53.190 [two further attempts on tqpair=0x7fa940000b90 at 13:02:24.059514 and 13:02:24.059737 fail identically; subsequent attempts use a new tqpair handle]
00:27:53.190 [2024-07-15 13:02:24.060006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.190 [2024-07-15 13:02:24.060078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.190 qpair failed and we were unable to recover it.
00:27:53.194 [the error triplet above repeats verbatim for 148 consecutive connect attempts between 13:02:24.060006 and 13:02:24.096406, every one failing with errno = 111 against tqpair=0x7fa948000b90, addr=10.0.0.2, port=4420]
00:27:53.473 [2024-07-15 13:02:24.096642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.473 [2024-07-15 13:02:24.096674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.473 qpair failed and we were unable to recover it. 00:27:53.473 [2024-07-15 13:02:24.096887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.473 [2024-07-15 13:02:24.096918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.473 qpair failed and we were unable to recover it. 00:27:53.473 [2024-07-15 13:02:24.097123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.473 [2024-07-15 13:02:24.097154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.473 qpair failed and we were unable to recover it. 00:27:53.473 [2024-07-15 13:02:24.097355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.097387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.097534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.097565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.097803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.097836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.098149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.098181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.098414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.098447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.098640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.098672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.098933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.098965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 
00:27:53.474 [2024-07-15 13:02:24.099172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.099203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.099350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.099382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.099524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.099555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.099747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.099778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.099985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.100015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.100152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.100183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.100407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.100440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.100658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.100689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.100817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.100847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.101101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.101134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 
00:27:53.474 [2024-07-15 13:02:24.101346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.101380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.101525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.101557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.101680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.101711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.101849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.101880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.102088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.102118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.102326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.102357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.102565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.102597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.102722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.102752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.102883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.102913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.103194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.103234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 
00:27:53.474 [2024-07-15 13:02:24.103506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.103538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.103748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.103780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.104048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.104078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.104360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.104393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.104675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.104707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.104922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.104952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.105153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.105185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.105400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.105433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.105650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.105682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.105891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.105922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 
00:27:53.474 [2024-07-15 13:02:24.106180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.106212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.106437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.106470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.106759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.106790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.474 [2024-07-15 13:02:24.107072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.474 [2024-07-15 13:02:24.107103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.474 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.107385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.107417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.107621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.107653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.107862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.107893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.108117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.108148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.108360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.108393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.108534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.108566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 
00:27:53.475 [2024-07-15 13:02:24.108756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.108788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.108979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.109009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.109214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.109257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.109541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.109573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.109774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.109805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.109943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.109974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.110192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.110230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.110528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.110559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.110819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.110850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.110996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.111027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 
00:27:53.475 [2024-07-15 13:02:24.111151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.111182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.111488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.111521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.111735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.111767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.111898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.111929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.112136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.112168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.112477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.112510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.112663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.112694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.112819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.112849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.113056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.113087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.113289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.113322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 
00:27:53.475 [2024-07-15 13:02:24.113461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.113492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.113619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.113649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.113852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.113882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.114173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.114205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.114366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.114396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.114648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.114679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.114883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.114914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.115127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.115158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.115360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.115393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.115683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.115715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 
00:27:53.475 [2024-07-15 13:02:24.115931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.115963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.116232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.116266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.116471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.116503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.116727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.475 [2024-07-15 13:02:24.116758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.475 qpair failed and we were unable to recover it. 00:27:53.475 [2024-07-15 13:02:24.116897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.116928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.117081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.117112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.117247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.117279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.117510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.117542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.117746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.117777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.118005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.118037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-07-15 13:02:24.118175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.118206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.118356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.118388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.118597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.118632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.118841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.118873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.119065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.119096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.119289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.119322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.119466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.119498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.119646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.119677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.119823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.119854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.120049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.120081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-07-15 13:02:24.120236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.120269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.120461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.120492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.120683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.120715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.120838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.120869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.121057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.121089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.121322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.121355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.121501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.121532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.121797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.121829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.122033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.122064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.122345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.122377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-07-15 13:02:24.122665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.122697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.122846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.122878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.123091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.123123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.123331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.123364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.123520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.123551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.123682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.123714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.123975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.124007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.124200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.124241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.124454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.124485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.124628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.124659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-07-15 13:02:24.124811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.124843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.125038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.125069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.125293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.125326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.125529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-07-15 13:02:24.125561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-07-15 13:02:24.125796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.125828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.126093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.126125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.126280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.126312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.126452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.126484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.126597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.126628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.126835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.126866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-07-15 13:02:24.127000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.127032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.127241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.127273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.127558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.127594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.127723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.127755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.127959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.127990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.128186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.128217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.128419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.128451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.128733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.128765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.129026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.129058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-07-15 13:02:24.129195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-07-15 13:02:24.129235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-07-15 13:02:24.178095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.178126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.178392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.178424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.178575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.178606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.178802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.178832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.179137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.179168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.179350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.179383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.179673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.179705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.179844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.179875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.180103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.180134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.180260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.180292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-07-15 13:02:24.180503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.180534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.180792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.180823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.180963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.180995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.181120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-07-15 13:02:24.181152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-07-15 13:02:24.181354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.181386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.181579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.181610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.181745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.181775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.182051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.182083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.182283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.182315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.182551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.182584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-07-15 13:02:24.182866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.182897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.183094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.183125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.183321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.183353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.183605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.183637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.183769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.183801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.184003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.184035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.184236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.184269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.184547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.184579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.184778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.184808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.184999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.185029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-07-15 13:02:24.185159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.185190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.185384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.185416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.185624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.185661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.185919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.185951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.186139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.186170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.186446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.186479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.186712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.186743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.186946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.186977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.187128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.187159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.187367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.187400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-07-15 13:02:24.187609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.187640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.187781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.187811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.188074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.188105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.188246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.188277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.188483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.188517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.188661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.188692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.188840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.188870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.188997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.189028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.189211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.189250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.189535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.189566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-07-15 13:02:24.189712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.189743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.189952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.189986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.190209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.190248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.190448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.190479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-07-15 13:02:24.190676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-07-15 13:02:24.190707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.190865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.190897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.191041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.191074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.191275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.191309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.191457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.191490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.191677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.191708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 
00:27:53.484 [2024-07-15 13:02:24.191843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.191875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.192003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.192034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.192172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.192202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.192492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.192524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.192652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.192683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.192845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.192876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.193029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.193060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.193343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.193377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.193569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.193600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.193749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.193780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 
00:27:53.484 [2024-07-15 13:02:24.193935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.193966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.194180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.194212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.194412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.194449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.194591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.194621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.194777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.194808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.194944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.194976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.195127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.195158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.195296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.195328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.195590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.195623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.195836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.195868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 
00:27:53.484 [2024-07-15 13:02:24.195995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.196026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.196244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.196277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.196462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.196494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.196632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.196664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.196891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.196921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.197184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.197214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.197486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.197519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.197663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.197693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.197905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.197937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.198155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.198187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 
00:27:53.484 [2024-07-15 13:02:24.198421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.198453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.198715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.198746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.198880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.198911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.199066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.199097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.199297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.199331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.484 [2024-07-15 13:02:24.199596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.484 [2024-07-15 13:02:24.199628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.484 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.199824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.199856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.200045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.200076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.200300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.200332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.200557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.200589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 
00:27:53.485 [2024-07-15 13:02:24.200785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.200816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.200974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.201005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.201223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.201261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.201461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.201493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.201701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.201732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.201930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.201962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.202070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.202101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.202313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.202347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.202481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.202512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.202647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.202679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 
00:27:53.485 [2024-07-15 13:02:24.202804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.202835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.203031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.203062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.203203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.203249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.203402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.203434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.203622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.203654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.203921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.203952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.204214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.204255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.204463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.204496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.204692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.204723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.204902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.204934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 
00:27:53.485 [2024-07-15 13:02:24.205077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.205109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.205351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.205385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.205649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.205680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.205872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.205903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.206082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.206113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.206242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.206274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.206437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.206468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.206671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.206703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.206911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.206944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 00:27:53.485 [2024-07-15 13:02:24.207142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.485 [2024-07-15 13:02:24.207175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.485 qpair failed and we were unable to recover it. 
00:27:53.485 [2024-07-15 13:02:24.207459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.485 [2024-07-15 13:02:24.207492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.485 qpair failed and we were unable to recover it.
[the same three-line error triple repeats for every reconnect attempt from 13:02:24.207 through 13:02:24.255; apart from the timestamps, each repetition is identical: errno = 111, tqpair=0x7fa948000b90, addr=10.0.0.2, port=4420]
00:27:53.491 [2024-07-15 13:02:24.255006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.491 [2024-07-15 13:02:24.255042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.491 qpair failed and we were unable to recover it.
00:27:53.491 [2024-07-15 13:02:24.255240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.255270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.255494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.255525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.255649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.255678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.255811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.255842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.256011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.256040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.256264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.256296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.256492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.256522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.256720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.256750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.257021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.257051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.257189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.257218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.491 [2024-07-15 13:02:24.257373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.257404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.257533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.257563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.257701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.257730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.258019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.258050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.258261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.258293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.258504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.258534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.258667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.258697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.258831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.258860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.258998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.259027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.259290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.259321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.491 [2024-07-15 13:02:24.259526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.259557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.259823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.259853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.260075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.260105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.260306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.260336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.260587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.260617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.260771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.260801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.261085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.261116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.261315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.261345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.261478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-07-15 13:02:24.261508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-07-15 13:02:24.261852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.261883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 
00:27:53.492 [2024-07-15 13:02:24.262075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.262105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.262306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.262337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.262483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.262513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.262696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.262725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.262987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.263016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.263138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.263167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.263359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.263391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.263529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.263559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.263756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.263784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.263937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.263971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 
00:27:53.492 [2024-07-15 13:02:24.264115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.264145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.264348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.264378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.264512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.264541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.264750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.264779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.264969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.264999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.265123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.265153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.265291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.265321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.265472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.265502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.265694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.265725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.266032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.266063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 
00:27:53.492 [2024-07-15 13:02:24.266187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.266217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.266377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.266408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.266540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.266570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.266704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.266734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.266934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.266965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.267115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.267145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.267274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.267305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.267454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.267485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.267696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.267726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.267939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.267969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 
00:27:53.492 [2024-07-15 13:02:24.268164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.268193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.268435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.268467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.268605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.268634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.268768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.268798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.268957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.268989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.269113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.269143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.269290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.269322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.269454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.269483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.269743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.269773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 00:27:53.492 [2024-07-15 13:02:24.269902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.492 [2024-07-15 13:02:24.269931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.492 qpair failed and we were unable to recover it. 
00:27:53.492 [2024-07-15 13:02:24.270126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.270156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.270346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.270377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.270529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.270559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.270763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.270793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.270927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.270957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.271092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.271123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.271287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.271318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.271520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.271549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.271709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.271737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.271938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.271973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-07-15 13:02:24.272099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.272129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.272264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.272295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.272495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.272525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.272764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.272794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.272933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.272963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.273103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.273134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.273286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.273316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.273508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.273537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.273664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.273695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.273819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.273848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-07-15 13:02:24.273977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.274006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.274147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.274177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.274321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.274351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.274483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.274512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.274636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.274666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.274786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.274815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.275071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.275102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.275310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.275342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.275486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.275516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.275648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.275678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-07-15 13:02:24.275888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.275917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.276044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.276073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.276271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.276302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.276457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.276487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.276696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.276727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.276845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.276874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.277266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.277298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.277494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.277525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.277718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.277748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-07-15 13:02:24.277974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.278005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-07-15 13:02:24.278128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-07-15 13:02:24.278158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.278419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.278450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.278594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.278624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.278834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.278863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.279000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.279029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.279174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.279204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.279418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.279450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.279649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.279678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.279831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.279860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.279995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.280031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.494 [2024-07-15 13:02:24.280273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.280305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.280442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.280473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.280614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.280644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.280780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.280810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.281009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.281039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.281238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.281269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.281396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.281426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.281553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.281584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.281721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.281751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.281889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.281920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.494 [2024-07-15 13:02:24.282049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.282079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.282331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.282362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.282510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.282540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.282753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.282784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.282994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.283024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.283219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.283257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.283407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.283438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.283562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.283592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.283740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.283770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-07-15 13:02:24.283918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-07-15 13:02:24.283948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.499 [2024-07-15 13:02:24.323898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.323927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.324061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.324092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.324284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.324316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.324448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.324478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.324613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.324644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.324754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.324784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.324937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.324966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.325166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.325197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.325375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.325445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 00:27:53.499 [2024-07-15 13:02:24.325606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.499 [2024-07-15 13:02:24.325640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.499 qpair failed and we were unable to recover it. 
00:27:53.504 [2024-07-15 13:02:24.360756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.360786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.360927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.360957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.361089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.361119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.361324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.361358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.361599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.361629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.361762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.361792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.361937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.361966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.362101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.362131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.362272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.362305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.362454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 
00:27:53.504 [2024-07-15 13:02:24.362619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.362649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.362851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.362881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.363081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.363111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.363304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.363336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.363531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.363562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.363783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.363813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.363958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.363989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.364275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.364307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.364444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.364473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.364605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.364635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 
00:27:53.504 [2024-07-15 13:02:24.364771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.364801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.364995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.365026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.365240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.365274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.365472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.365502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.365700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.365730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.365875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.504 [2024-07-15 13:02:24.365905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.504 qpair failed and we were unable to recover it. 00:27:53.504 [2024-07-15 13:02:24.366041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.366071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.366190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.366223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.366391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.366422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.366565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.366601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 
00:27:53.505 [2024-07-15 13:02:24.366800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.366831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.367046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.367080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.367368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.367401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.367547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.367578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.367726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.367755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.367995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.368025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.368259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.368298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.368424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.368454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.368649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.368678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.368871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.368901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 
00:27:53.505 [2024-07-15 13:02:24.369091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.369121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.369253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.369285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.369478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.369509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.369651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.369682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.369834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.369865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.369993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.370024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.370306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.370338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.370619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.370649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.370904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.370934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.371059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.371089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 
00:27:53.505 [2024-07-15 13:02:24.371241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.371274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.371414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.371444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.371566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.371596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.371853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.371884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.372011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.372040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.372173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.372203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.372447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.372481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.372690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.372720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.372913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.372943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.373137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.373167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 
00:27:53.505 [2024-07-15 13:02:24.373307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.373339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.373459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.373490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.373617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.373647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.373856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.373886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.374085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.374115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.374321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.374353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.374613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.505 [2024-07-15 13:02:24.374643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.505 qpair failed and we were unable to recover it. 00:27:53.505 [2024-07-15 13:02:24.374838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.374869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.374994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.375024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.375238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.375268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 
00:27:53.506 [2024-07-15 13:02:24.375562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.375592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.375738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.375768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.375906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.375936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.376137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.376167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.376375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.376405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.376598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.376628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.376907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.376944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.377070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.377099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.377305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.377336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.377565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.377595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 
00:27:53.506 [2024-07-15 13:02:24.377786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.377817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.378023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.378054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.378192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.378222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.378370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.378400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.378535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.378565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.378811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.378841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.379052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.379082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.379222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.379275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.379470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.379500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.379630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.379660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 
00:27:53.506 [2024-07-15 13:02:24.379811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.379842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.380002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.380031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.380180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.380209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.380429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.380459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.380600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.380630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.380828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.380858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.381003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.381032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.381340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.381371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.381573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.381604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.381864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.381894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 
00:27:53.506 [2024-07-15 13:02:24.382032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.382062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.382210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.382248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.506 [2024-07-15 13:02:24.382388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.506 [2024-07-15 13:02:24.382418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.506 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.382621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.382651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.382793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.382823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.383025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.383056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.383341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.383372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.383507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.383537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.383821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.383851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.383975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.384005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 
00:27:53.507 [2024-07-15 13:02:24.384143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.384173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.384334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.384365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.384511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.384541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.384759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.384789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.384926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.384956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.385093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.385123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.385272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.385308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.385520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.385550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.385694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.385725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.385870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.385899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 
00:27:53.507 [2024-07-15 13:02:24.386106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.386136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.386347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.386377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.386514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.386544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.386675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.386705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.386837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.386866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.387018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.387048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.387272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.387302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.387442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.387471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.387609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.387639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 00:27:53.507 [2024-07-15 13:02:24.387776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.507 [2024-07-15 13:02:24.387805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:53.507 qpair failed and we were unable to recover it. 
00:27:53.507 [2024-07-15 13:02:24.387987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.507 [2024-07-15 13:02:24.388017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.507 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for tqpair=0x7fa950000b90, timestamps advancing from 13:02:24.388222 through 13:02:24.417417 ...]
00:27:53.791 [2024-07-15 13:02:24.417704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.791 [2024-07-15 13:02:24.417774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.791 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7fa948000b90, timestamps advancing from 13:02:24.418079 through 13:02:24.434239 ...]
00:27:53.793 [2024-07-15 13:02:24.434450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.434482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.434691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.434722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.434927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.434958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.435206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.435244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.435443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.435473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.435626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.435657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.435866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.435896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.436088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.436119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.436355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.436386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.436521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.436551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 
00:27:53.793 [2024-07-15 13:02:24.436697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.436727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.436873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.436903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.437053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.437082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.437209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.437248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.437440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.437471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.437667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.437697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.437911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.437942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.438153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.438183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.438328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.438358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.438509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.438538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 
00:27:53.793 [2024-07-15 13:02:24.438657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.438687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.438907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.438937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.439075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.439105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.793 [2024-07-15 13:02:24.439294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.793 [2024-07-15 13:02:24.439325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.793 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.439534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.439564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.439851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.439882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.440142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.440172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.440319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.440351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.440479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.440509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.440710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.440739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 
00:27:53.794 [2024-07-15 13:02:24.440884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.440913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.441150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.441180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.441402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.441433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.441558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.441588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.441732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.441761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.441924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.441953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.442167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.442198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.442355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.442386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.442544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.442575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.442771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.442801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 
00:27:53.794 [2024-07-15 13:02:24.443028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.443058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.443173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.443203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.443497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.443528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.443736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.443771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.443979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.444010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.444297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.444329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.444547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.444577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.444725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.444755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.444964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.444994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.445279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.445310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 
00:27:53.794 [2024-07-15 13:02:24.445522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.445553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.445747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.445777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.445984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.446014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.446234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.446265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.446416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.446446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.446705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.794 [2024-07-15 13:02:24.446735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.794 qpair failed and we were unable to recover it. 00:27:53.794 [2024-07-15 13:02:24.446929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.446959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.447104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.447134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.447340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.447371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.447657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.447688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 
00:27:53.795 [2024-07-15 13:02:24.447878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.447908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.448048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.448078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.448409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.448441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.448640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.448670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.448822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.448852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.449002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.449031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.449162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.449192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.449492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.449524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.449669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.449700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.449833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.449863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 
00:27:53.795 [2024-07-15 13:02:24.450062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.450091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.450284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.450314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.450571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.450600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.450786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.450816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.451076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.451107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.451321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.451352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.451552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.451582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.451784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.451815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.452009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.452038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.452245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.452277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 
00:27:53.795 [2024-07-15 13:02:24.452491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.452521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.452651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.452681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.452880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.452911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.453112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.453147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.453339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.453370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.453574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.453603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.453796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.453827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.454034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.454064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.454278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.454309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.454528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.454558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 
00:27:53.795 [2024-07-15 13:02:24.454706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.454736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.454950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.454981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.455243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.455275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.455433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.455463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.455627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.455656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.455851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.455880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.456087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.795 [2024-07-15 13:02:24.456118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.795 qpair failed and we were unable to recover it. 00:27:53.795 [2024-07-15 13:02:24.456261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.456292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.456491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.456521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.456719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.456750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 
00:27:53.796 [2024-07-15 13:02:24.456916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.456945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.457206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.457245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.457443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.457473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.457705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.457734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.457963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.457992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.458183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.458213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.458338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.458369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.458510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.458538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.458676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.458706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.458900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.458931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 
00:27:53.796 [2024-07-15 13:02:24.459066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.459096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.459244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.459274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.459468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.459497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.459687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.459717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.459933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.459964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.460105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.460135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.460332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.460364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.460523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.460552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.460682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.460712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.460850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.460880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 
00:27:53.796 [2024-07-15 13:02:24.461023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.461052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.461176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.461206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.461362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.461394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.461596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.461630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.461890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.461920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.462109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.462139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.462295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.462326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.462477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.462508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.462699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.462730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 00:27:53.796 [2024-07-15 13:02:24.462894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.796 [2024-07-15 13:02:24.462923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.796 qpair failed and we were unable to recover it. 
00:27:53.796 [2024-07-15 13:02:24.463117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.796 [2024-07-15 13:02:24.463147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.796 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-15 13:02:24.463344 through 13:02:24.511865; only the timestamps vary ...]
00:27:53.802 [2024-07-15 13:02:24.512001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.512031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.512171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.512201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.512345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.512376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.512499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.512529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.512654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.512684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.512875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.512905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.513097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.513127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.513264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.513294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.513426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.513457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.513661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.513691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 
00:27:53.802 [2024-07-15 13:02:24.513914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.513945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.514186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.514216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.514417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.514447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.514657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.514687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.514818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.514848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.514980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.515010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.515188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.515219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.515375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.515405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.515599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.515630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.515769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.515799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 
00:27:53.802 [2024-07-15 13:02:24.516006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.516036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.516247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.516279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.516421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.516451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.516655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.516692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.516867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.516898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.517109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.517139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.517281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.517312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.517451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.517481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.517693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.517723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.517916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.517947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 
00:27:53.802 [2024-07-15 13:02:24.518144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.518174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.518399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.518431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.518571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.518602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.518808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.518838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.519042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.519071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.519207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.519246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.802 [2024-07-15 13:02:24.519373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.802 [2024-07-15 13:02:24.519404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.802 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.519561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.519591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.519797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.519827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.519952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.519982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 
00:27:53.803 [2024-07-15 13:02:24.520133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.520163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.520305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.520336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.520474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.520503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.520646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.520677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.520979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.521009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.521142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.521172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.521324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.521355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.521516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.521547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.521760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.521790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.521950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.521981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 
00:27:53.803 [2024-07-15 13:02:24.522181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.522212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.522415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.522445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.522604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.522634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.522769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.522799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.522972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.523002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.523128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.523158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.523297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.523328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.523457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.523488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.523624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.523655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.523797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.523827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 
00:27:53.803 [2024-07-15 13:02:24.524031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.524061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.524199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.524238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.524442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.524473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.524664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.524699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.524834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.524864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.524996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.525026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.525166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.525197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.525398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.525428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.525620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.525651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.525790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.525820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 
00:27:53.803 [2024-07-15 13:02:24.526029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.526059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.803 [2024-07-15 13:02:24.526272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.803 [2024-07-15 13:02:24.526304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.803 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.526448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.526478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.526608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.526638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.526768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.526797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.527002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.527032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.527156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.527187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.527339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.527371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.527666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.527696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.527907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.527938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 
00:27:53.804 [2024-07-15 13:02:24.528135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.528165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.528302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.528333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.528463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.528493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.528695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.528725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.529001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.529031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.529156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.529186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.529391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.529422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.529555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.529585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.529721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.529751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.529878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.529907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 
00:27:53.804 [2024-07-15 13:02:24.530136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.530206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.530382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.530416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.530611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.530642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.530832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.530863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.531002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.531032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.531247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.531279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.531466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.531497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.531637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.531666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.531799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.531829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.531975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.532004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 
00:27:53.804 [2024-07-15 13:02:24.532141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.532172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.532320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.532351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.532549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.532579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.532772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.532801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.533013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.533043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.533242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.533273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.533398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.533428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.533564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.533593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.533721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.533751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.533954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.533984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 
00:27:53.804 [2024-07-15 13:02:24.534105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.534136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.534266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.534297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.534441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.804 [2024-07-15 13:02:24.534470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.804 qpair failed and we were unable to recover it. 00:27:53.804 [2024-07-15 13:02:24.534604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.534634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.534821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.534851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.535112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.535142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.535410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.535441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.535574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.535609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.535821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.535852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.536017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.536047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 
00:27:53.805 [2024-07-15 13:02:24.536176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.536207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.536410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.536440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.536720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.536749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.536873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.536903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.537063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.537092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.537286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.537317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.537471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.537501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.537707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.537737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.537874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.537903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.538010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.538040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 
00:27:53.805 [2024-07-15 13:02:24.538138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.538168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.538320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.538351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.538476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.538506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.538644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.538674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.538938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.538968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.539246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.539277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.539419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.539449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.539644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.539674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.539876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.539905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.540054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.540083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 
00:27:53.805 [2024-07-15 13:02:24.540239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.540270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.540427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.540457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.540691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.540721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.540846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.540876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.541003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.541037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.541244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.541275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.541414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.541444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.541590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.541619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.541741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.541771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.541982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.542011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 
00:27:53.805 [2024-07-15 13:02:24.542272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.542303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.542529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.542559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.542698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.542728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.542859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.805 [2024-07-15 13:02:24.542889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.805 qpair failed and we were unable to recover it. 00:27:53.805 [2024-07-15 13:02:24.543098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.543128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.543340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.543372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.543507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.543537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.543685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.543715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.543849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.543880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.544011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.544041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 
00:27:53.806 [2024-07-15 13:02:24.544282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.544313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.544505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.544535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.544662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.544692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.544920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.544950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.545085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.545115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.545317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.545348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.545543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.545573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.545752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.545782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.545972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.546001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.546135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.546164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 
00:27:53.806 [2024-07-15 13:02:24.546425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.546455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.546593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.546627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.546820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.546851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.547052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.547082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.547237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.547267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.547387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.547417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.547558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.547588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.547710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.547740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.547933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.547963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.548099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.548128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 
00:27:53.806 [2024-07-15 13:02:24.548256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.548288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.548415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.548445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.548647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.548677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.548870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.548900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.549021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.549051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.549246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.549278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.549490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.549520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.549642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.549672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.549877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.549906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.550037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.550067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 
00:27:53.806 [2024-07-15 13:02:24.550207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.550248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.550397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.550427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.550551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.550581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.550785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.550815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.551073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.551103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.806 qpair failed and we were unable to recover it. 00:27:53.806 [2024-07-15 13:02:24.551223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.806 [2024-07-15 13:02:24.551263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.551460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.551489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.551757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.551787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.551917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.551947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.552145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.552176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 
00:27:53.807 [2024-07-15 13:02:24.552438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.552470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.552599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.552629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.552761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.552791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.552939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.552969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.553106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.553135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.553288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.553319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.553457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.553486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.553617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.553647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.553773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.553802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 00:27:53.807 [2024-07-15 13:02:24.553991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.807 [2024-07-15 13:02:24.554021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.807 qpair failed and we were unable to recover it. 
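The repeated "errno = 111" above is Linux ECONNREFUSED: each TCP connection attempt to 10.0.0.2:4420 is answered with a RST because nothing is listening on the NVMe/TCP target port at that moment. Below is a minimal standalone sketch of the same failure mode, independent of SPDK; the file name is made up, and the probed address/port simply mirror the log (any reachable host with no listener on the port behaves the same):

/* probe_connect.c - reproduce the "connect() failed, errno = 111" seen above.
 * Build: cc -o probe_connect probe_connect.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on a reachable host, the SYN gets a RST and
         * connect() fails with ECONNREFUSED, which is errno 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}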
[... 2 further identical failure sequences for tqpair=0x110ced0, ending 2024-07-15 13:02:24.554596 ...]
[... 22 identical failure sequences for a new qpair, tqpair=0x7fa940000b90, 2024-07-15 13:02:24.554913 through 13:02:24.559198 ...]
[... 6 identical failure sequences for another new qpair, tqpair=0x7fa948000b90, beginning 2024-07-15 13:02:24.559375 ...]
[... 99 further identical failure sequences for tqpair=0x7fa948000b90, beginning 2024-07-15 13:02:24.560735; the last is shown below ...]
00:27:53.810 [2024-07-15 13:02:24.583752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.810 [2024-07-15 13:02:24.583783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.810 qpair failed and we were unable to recover it.
00:27:53.810 [2024-07-15 13:02:24.583909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.583939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.584223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.584261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.584397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.584428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.584576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.584606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.584797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.584828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.584970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.585001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.585216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.585256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.585399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.585430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.585631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.585662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.585796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.585827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 
00:27:53.810 [2024-07-15 13:02:24.585964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.585995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.586220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.586259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.586393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.810 [2024-07-15 13:02:24.586424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.810 qpair failed and we were unable to recover it. 00:27:53.810 [2024-07-15 13:02:24.586565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.586595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.586785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.586815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.586940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.586970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.587162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.587193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.587336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.587368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.587559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.587590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.587704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.587734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 
00:27:53.811 [2024-07-15 13:02:24.587868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.587898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.588110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.588140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.588274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.588306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.588449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.588479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.588619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.588650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.588851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.588882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.589025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.589055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.589192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.589223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.589449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.589480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.589676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.589706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 
00:27:53.811 [2024-07-15 13:02:24.589840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.589871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.590067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.590096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.590288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.590324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.590454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.590484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.590621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.590652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.590888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.590918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.591115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.591145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.591337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.591368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.591518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.591549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.591694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.591725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 
00:27:53.811 [2024-07-15 13:02:24.591917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.591947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.592079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.592109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.592246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.592278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.592438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.592469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.592669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.592699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.592837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.592867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.593109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.593139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.593281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.811 [2024-07-15 13:02:24.593312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.811 qpair failed and we were unable to recover it. 00:27:53.811 [2024-07-15 13:02:24.593454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.593484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.593684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.593715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 
00:27:53.812 [2024-07-15 13:02:24.593906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.593937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.594073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.594103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.594240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.594271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.594500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.594531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.594672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.594702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.594832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.594862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.595063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.595093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.595236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.595267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.595552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.595582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.595778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.595809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 
00:27:53.812 [2024-07-15 13:02:24.595945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.595975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.596177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.596207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.596349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.596379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.596572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.596602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.596818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.596848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.596959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.596989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.597191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.597221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.597396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.597427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.597650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.597680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.597933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.597963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 
00:27:53.812 [2024-07-15 13:02:24.598089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.598120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.598317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.598348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.598486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.598521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.598732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.598762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.598902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.598933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.599083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.599114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.599304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.599335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.599454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.599484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.599675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.599706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.599900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.599930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 
00:27:53.812 [2024-07-15 13:02:24.600137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.600168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.600303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.600334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.600568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.600598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.600744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.600775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.600987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.601017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.601210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.601248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.601393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.601423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.601569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.601600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.812 [2024-07-15 13:02:24.601733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.812 [2024-07-15 13:02:24.601764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.812 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.601894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.601925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-07-15 13:02:24.602075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.602105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.602241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.602273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.602420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.602450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.602593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.602623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.602750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.602780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.602913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.602944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.603208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.603250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.603375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.603406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.603664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.603694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.603905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.603936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-07-15 13:02:24.604112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.604142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.604363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.604394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.604587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.604617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.604783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.604813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.604937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.604967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.605275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.605307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.605448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.605479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.605635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.605665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.605871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.605901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.606046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.606076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-07-15 13:02:24.606282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.606313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.606569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.606600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.606795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.606835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.606999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.607030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.607223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.607260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.607416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.607446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.607658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.607689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.607949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.607979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.608137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.608167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.608326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.608358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-07-15 13:02:24.608569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.608599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.608743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.608773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.608978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.609009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.609220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.609260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.609405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.609436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.609651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.609681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.609885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.609915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.610188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.610219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.610363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.610393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-07-15 13:02:24.610587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-07-15 13:02:24.610617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 
00:27:53.814 [2024-07-15 13:02:24.610767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.814 [2024-07-15 13:02:24.610797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:53.814 qpair failed and we were unable to recover it.
00:27:53.814 [... the three-line connect()/qpair failure above repeats 89 times for tqpair=0x7fa948000b90 (13:02:24.610767 through 13:02:24.629286), differing only in timestamps ...]
00:27:53.816 [2024-07-15 13:02:24.629558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.816 [2024-07-15 13:02:24.629627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.816 qpair failed and we were unable to recover it.
00:27:53.816 [... the same failure repeats 80 times for tqpair=0x7fa950000b90 (13:02:24.629558 through 13:02:24.647573) ...]
00:27:53.818 [2024-07-15 13:02:24.647888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.818 [2024-07-15 13:02:24.647956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:53.818 qpair failed and we were unable to recover it.
00:27:53.818 [... the same failure repeats 41 times for tqpair=0x7fa940000b90 (13:02:24.647888 through 13:02:24.657687) ...]
00:27:53.819 [2024-07-15 13:02:24.657922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.657952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.658151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.658180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.658458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.658489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.658633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.658663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.658859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.658889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.659168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.659198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.659409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.659439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.659632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.659661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.659848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.659878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.660105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.660135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 
00:27:53.819 [2024-07-15 13:02:24.660293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.660324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.660518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.660548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.660699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.660734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.660966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.660997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.661306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.661337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.661471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.661501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.661709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.661739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.661881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.661910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.662153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.662182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.662386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.662417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 
00:27:53.819 [2024-07-15 13:02:24.662687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.662717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.662911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.662941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.663139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.663168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.663366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.663396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.663588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.663618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.663901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.663931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.664081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.664111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.664372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.664404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.664623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-07-15 13:02:24.664653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-07-15 13:02:24.664810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.664839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 
00:27:53.820 [2024-07-15 13:02:24.664989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.665019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.665219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.665256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.665445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.665475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.665669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.665699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.665900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.665930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.666133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.666163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.666473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.666505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.666615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.666645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.666852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.666882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.667040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.667109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 
00:27:53.820 [2024-07-15 13:02:24.667360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.667396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.667659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.667690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.667973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.668004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.668197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.668237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.668448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.668479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.668685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.668714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.668970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.669000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.669150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.669181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.669397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.669427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.669658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.669689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 
00:27:53.820 [2024-07-15 13:02:24.669814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.669845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.670056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.670085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.670247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.670280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.670439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.670470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.670726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.670756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.670952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.670983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.671268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.671300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.671503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.671534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.671790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.671820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.672029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.672059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 
00:27:53.820 [2024-07-15 13:02:24.672267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.672297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.672450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.672481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.672630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.672660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.672800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.672830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.820 [2024-07-15 13:02:24.673051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.820 [2024-07-15 13:02:24.673081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.820 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.673293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.673323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.673625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.673666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.673925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.673956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.674084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.674114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.674311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.674342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 
00:27:53.821 [2024-07-15 13:02:24.674529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.674558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.674800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.674830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.675024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.675054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.675264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.675295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.675531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.675561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.675782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.675812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.676008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.676037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.676241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.676273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.676555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.676585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.676790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.676820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 
00:27:53.821 [2024-07-15 13:02:24.677104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.677134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.677330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.677361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.677620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.677650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.677803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.677832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.678023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.678053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.678272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.678303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.678516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.678550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.678818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.678849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.678977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.679007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.679150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.679181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 
00:27:53.821 [2024-07-15 13:02:24.679433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.679464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.679677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.679706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.679850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.679881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.680092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.680127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.680408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.680440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.680653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.680684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.680911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.680940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.681157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.681187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.681451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.681482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.681739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.681769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 
00:27:53.821 [2024-07-15 13:02:24.681960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.681990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.682213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.682260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.682469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.682499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.682636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.682666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.682922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.682953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.821 [2024-07-15 13:02:24.683182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.821 [2024-07-15 13:02:24.683211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.821 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.683429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.683459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.683611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.683641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.683834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.683863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.684018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.684049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 
00:27:53.822 [2024-07-15 13:02:24.684333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.684364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.684643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.684673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.684796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.684826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.685058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.685088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.685279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.685309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.685435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.685466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.685657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.685687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.685847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.685877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.686004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.686034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.686289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.686321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 
00:27:53.822 [2024-07-15 13:02:24.686488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.686518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.686713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.686744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.686937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.686968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.687177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.687207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.687499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.687529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.687673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.687703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.687988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.688018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.688170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.688201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.688472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.688503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-07-15 13:02:24.688644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-07-15 13:02:24.688675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 
00:27:53.822 [2024-07-15 13:02:24.688798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.822 [2024-07-15 13:02:24.688828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:53.822 qpair failed and we were unable to recover it.
00:27:53.822 [last three messages repeated for every retried connect() attempt from 13:02:24.689029 through 13:02:24.710480, all against tqpair=0x110ced0, addr=10.0.0.2, port=4420]
00:27:53.824 [tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.710700 through 13:02:24.712579]
00:27:53.824 [2024-07-15 13:02:24.712751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.825 [2024-07-15 13:02:24.712820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.825 qpair failed and we were unable to recover it.
00:27:53.825 [2024-07-15 13:02:24.712983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.825 [2024-07-15 13:02:24.713017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:53.825 qpair failed and we were unable to recover it.
00:27:53.825 [one more identical failure against tqpair=0x7fa950000b90 at 13:02:24.713162, then the tqpair=0x110ced0 failures resume from 13:02:24.713415 through 13:02:24.714870]
00:27:53.825 [tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.715103 through 13:02:24.721975]
00:27:53.826 [tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.722186 through 13:02:24.723124]
00:27:54.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1875811 Killed "${NVMF_APP[@]}" "$@"
00:27:54.103 [three more tqpair=0x110ced0 connect() failures at 13:02:24.723387, 13:02:24.723655, 13:02:24.723832]
00:27:54.103 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:54.103 [tqpair=0x110ced0 connect() failure at 13:02:24.724125]
00:27:54.103 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:54.103 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:54.103 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:54.103 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:54.104 [interleaved tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.724418 through 13:02:24.725923]
00:27:54.104 [tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.726075 through 13:02:24.730726]
00:27:54.104 [tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.730952 through 13:02:24.731682]
00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1876751
00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1876751
00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1876751 ']'
00:27:54.104 [interleaved tqpair=0x110ced0 connect() failed, errno = 111 / qpair failed messages repeat from 13:02:24.731887 through 13:02:24.732621]
00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.104 [2024-07-15 13:02:24.732821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.104 [2024-07-15 13:02:24.732851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.104 qpair failed and we were unable to recover it. 00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.104 [2024-07-15 13:02:24.733068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.104 [2024-07-15 13:02:24.733099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.104 qpair failed and we were unable to recover it. 00:27:54.104 [2024-07-15 13:02:24.733290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.104 [2024-07-15 13:02:24.733323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.104 qpair failed and we were unable to recover it. 00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.104 [2024-07-15 13:02:24.733529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.104 [2024-07-15 13:02:24.733559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.104 qpair failed and we were unable to recover it. 00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.104 [2024-07-15 13:02:24.733701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.104 [2024-07-15 13:02:24.733732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.104 qpair failed and we were unable to recover it. 00:27:54.104 13:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.104 [2024-07-15 13:02:24.733988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.104 [2024-07-15 13:02:24.734018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.104 qpair failed and we were unable to recover it. 00:27:54.105 [2024-07-15 13:02:24.734250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.105 [2024-07-15 13:02:24.734280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.105 qpair failed and we were unable to recover it. 
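The trace above is the harness restarting the NVMe-oF target: nvmf/common.sh launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records its PID in nvmfpid, and waitforlisten blocks until the new process accepts connections on the RPC UNIX socket. waitforlisten itself is a shell function in autotest_common.sh; the C sketch below only illustrates the same poll-until-accepting idea, reusing the rpc_addr=/var/tmp/spdk.sock and max_retries=100 values visible in the trace (the 100 ms delay between attempts is an assumption, not taken from the log).

    /*
     * Illustrative sketch of what "waitforlisten" does above: retry a
     * connect() to the target's RPC UNIX socket until the freshly started
     * process is accept()ing, giving up after max_retries attempts.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_listen(const char *rpc_addr, int max_retries)
    {
        struct sockaddr_un sa;

        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, rpc_addr, sizeof(sa.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            /* connect() succeeds only once the target is accept()ing. */
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0;
            }
            close(fd);
            usleep(100 * 1000); /* assumed 100 ms pause between attempts */
        }
        return -1; /* process never started listening */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            printf("target is up and listening\n");
        else
            printf("timed out waiting for listener\n");
        return 0;
    }

Until that connect() succeeds, initiator-side reconnect attempts like the ones flooding this log keep failing.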
00:27:54.108 [2024-07-15 13:02:24.766849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.766879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.767091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.767121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.767261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.767292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.767435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.767465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.767720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.767788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.768159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.768237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.768421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.768487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.768723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.768756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.768880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.768911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.769112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.769143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 
00:27:54.108 [2024-07-15 13:02:24.769340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.769374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.769507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.769537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.769795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.769825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.769983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.770013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.770289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.108 [2024-07-15 13:02:24.770320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.108 qpair failed and we were unable to recover it. 00:27:54.108 [2024-07-15 13:02:24.770602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.109 [2024-07-15 13:02:24.770631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.109 qpair failed and we were unable to recover it. 00:27:54.109 [2024-07-15 13:02:24.770782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.109 [2024-07-15 13:02:24.770812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.109 qpair failed and we were unable to recover it. 00:27:54.109 [2024-07-15 13:02:24.770936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.109 [2024-07-15 13:02:24.770974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.109 qpair failed and we were unable to recover it. 00:27:54.109 [2024-07-15 13:02:24.771125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.109 [2024-07-15 13:02:24.771155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.109 qpair failed and we were unable to recover it. 00:27:54.109 [2024-07-15 13:02:24.771342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.109 [2024-07-15 13:02:24.771372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.109 qpair failed and we were unable to recover it. 
00:27:54.109 [2024-07-15 13:02:24.771659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.771689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.771898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.771928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.772121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.772151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.772272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.772302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.772412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.772442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.772701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.772730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.772949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.772979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.773180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.773210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.773421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.773452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.773595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.773625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.773760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.773790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.774004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.774034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.774194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.774233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.774430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.774460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.774641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.774672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.774809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.774839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.774952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.774981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.775238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.775269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.775497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.775527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.775756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.775786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.775914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.775944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.776145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.776176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.776448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.776480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.776627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.776657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.776964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.777004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.777154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.777185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.777339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.777373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.777508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.777539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.777736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.777767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.778030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.778060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.778307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.778338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.778532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.778562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.778825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.778855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.778995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.779026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.779160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.779190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.109 [2024-07-15 13:02:24.779336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.109 [2024-07-15 13:02:24.779368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.109 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.779585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.779615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.779829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.779867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.780010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.780040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.780242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.780273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.780480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.780510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.780720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.780750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.780904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.780934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.781192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.781223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.781455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.781486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.781611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.781641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.781843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.781872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.782111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.782142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.782353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.782384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.782519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.782549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.782712] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:27:54.110 [2024-07-15 13:02:24.782753] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:54.110 [2024-07-15 13:02:24.782758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.782788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.783043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.783071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.783198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.783236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.783379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.783409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.783549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.783579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.783807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.783837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.784060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.784090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.784238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.784269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.784475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.784506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.784716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.784746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.784961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.784992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.785190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.785220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.785490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.785521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.785661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.785692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.785887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.785918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.786197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.786237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.786493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.786523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.786751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.786781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.786976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.787006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.787198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.787238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.787429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.787459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.787652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.787681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.787906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.787936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.788219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.788269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.788504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.110 [2024-07-15 13:02:24.788534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.110 qpair failed and we were unable to recover it.
00:27:54.110 [2024-07-15 13:02:24.788741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.788771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.789030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.789065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.789282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.789313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.789572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.789602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.789794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.789824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.790043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.790073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.790291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.790322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.790583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.790613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.790747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.790777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.790981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.791010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.791214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.791253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.791541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.791571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.791781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.791811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.792034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.792064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.792287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.792317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.792583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.792613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.792738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.792768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.792952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.792982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.793183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.793213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.793450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.793481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.793617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.793647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.793849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.793879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.794139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.794169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.794300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.794331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.794439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.794470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.794777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.794807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.795041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.795071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.795328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.795360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.795513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.795545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.795746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.795777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.795990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.796021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.796216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.796269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.796521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.796551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.796762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.796792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.797006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.797035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.797232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.797263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.797473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.797503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.797783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.797813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.798069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.798100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.798304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.798335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.798564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.111 [2024-07-15 13:02:24.798595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.111 qpair failed and we were unable to recover it.
00:27:54.111 [2024-07-15 13:02:24.798823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.798864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.799097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.799127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.799255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.799287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.799564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.799594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.799803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.799833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.800093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.800123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.800334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.800367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.800533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.800568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.800721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.800751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.801014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.801044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.801246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.801276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.801484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.801513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.801724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.801754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.802038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.802067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.802335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.802366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.802623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.802653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.802856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.802886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.803083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.803112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.803315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.803347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.803487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.803517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.803723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.803753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.803943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.803973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.804125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.804155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.804363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.804394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.804537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.804566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.804827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.804857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.805141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.805171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.805346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.805376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.805605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.805635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.805839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.805870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.806074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.806103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.806246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.806277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.806550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.806579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.806715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.806746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.112 [2024-07-15 13:02:24.807002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.112 [2024-07-15 13:02:24.807032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.112 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.807245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.807276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.807514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.807544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.807735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.807765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.807968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.807998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.808207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.808245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.808448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.808485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.808689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.808719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.808910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.808941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.809222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.809259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 EAL: No free 2048 kB hugepages reported on node 1
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.809544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.809574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.809849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.809880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.810137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.810167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.810378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.810410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.810569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.810599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.810776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.810807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.811043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.811073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.811215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.811256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.811449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.113 [2024-07-15 13:02:24.811479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.113 qpair failed and we were unable to recover it.
00:27:54.113 [2024-07-15 13:02:24.811738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.811773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.811943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.811975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.812118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.812148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.812343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.812374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.812577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.812607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.812863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.812892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.813105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.813134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.813313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.813345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.813490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.813520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.813723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.813753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 
00:27:54.113 [2024-07-15 13:02:24.814010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.814039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.814173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.814203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.814422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.814453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.814711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.814740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.815021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.815050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.815192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.815222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.815510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.815539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.815732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.815761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.815970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.815999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.816215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.816256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 
00:27:54.113 [2024-07-15 13:02:24.816465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.816495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.816772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.816801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.113 qpair failed and we were unable to recover it. 00:27:54.113 [2024-07-15 13:02:24.816945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.113 [2024-07-15 13:02:24.816975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.817175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.817204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.817419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.817449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.817738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.817772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.817925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.817956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.818192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.818240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.818481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.818511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.818740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.818770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 
00:27:54.114 [2024-07-15 13:02:24.819032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.819062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.819266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.819298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.819479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.819508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.819794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.819824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.820020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.820050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.820194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.820233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.820493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.820522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.820716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.820746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.820955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.820985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.821125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.821155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 
00:27:54.114 [2024-07-15 13:02:24.821362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.821393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.821591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.821621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.821901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.821930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.822127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.822157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.822465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.822496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.822663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.822693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.822896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.822926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.823207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.823253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.823551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.823582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.823775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.823805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 
00:27:54.114 [2024-07-15 13:02:24.824105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.824136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.824290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.824322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.824525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.824555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.824754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.824784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.824925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.824955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.825208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.825246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.825483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.825512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.825697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.825726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.825903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.825932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.826244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.826276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 
00:27:54.114 [2024-07-15 13:02:24.826532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.826562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.826767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.826798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.827075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.114 [2024-07-15 13:02:24.827105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.114 qpair failed and we were unable to recover it. 00:27:54.114 [2024-07-15 13:02:24.827300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.827331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.827586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.827615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.827894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.827924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.828071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.828100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.828288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.828325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.828554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.828586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.828843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.828875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 
00:27:54.115 [2024-07-15 13:02:24.829085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.829115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.829373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.829405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.829607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.829638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.829830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.829859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.830085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.830115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.830373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.830404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.830532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.830562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.830699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.830729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.830920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.830949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.831221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.831258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 
00:27:54.115 [2024-07-15 13:02:24.831462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.831492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.831784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.831814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.832092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.832123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.832379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.832412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.832620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.832649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.832858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.832888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.833172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.833201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.833414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.833444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.833669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.833699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.833845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.833874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 
00:27:54.115 [2024-07-15 13:02:24.834139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.834171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.834373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.834404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.834628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.834658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.834889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.834918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.835080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.835111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.835372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.835403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.835622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.835652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.835809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.835840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.836047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.836077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.836291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.836322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 
00:27:54.115 [2024-07-15 13:02:24.836533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.836562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.836705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.836735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.837015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.837044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.115 [2024-07-15 13:02:24.837253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.115 [2024-07-15 13:02:24.837284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.115 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.837485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.837516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.837796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.837826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.838050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.838079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.838335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.838371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.838630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.838659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.838933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.838962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 
00:27:54.116 [2024-07-15 13:02:24.839241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.839272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.839550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.839579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.839809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.839840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.839982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.840012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.840264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.840296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.840430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.840460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.840735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.840766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.840978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.841008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.841266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.841298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.841577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.841608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 
00:27:54.116 [2024-07-15 13:02:24.841807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.841836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.842111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.842141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.842350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.842383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.842635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.842664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.842802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.842833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.843107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.843138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.843290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.843322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.843607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.843638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.843832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.843861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.844015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.844046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 
00:27:54.116 [2024-07-15 13:02:24.844173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.844203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.844441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.844472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.844664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.844694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.844882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.844911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.845058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.845089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.845281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.845313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.845465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.845495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.845673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.845704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.116 [2024-07-15 13:02:24.845907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.116 [2024-07-15 13:02:24.845938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.116 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.846156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.846187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 
00:27:54.117 [2024-07-15 13:02:24.846427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.846458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.846731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.846761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.847046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.847075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.847245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.847277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.847487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.847517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.847719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.847750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.847989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.848019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.848235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.848271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.848533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.848563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.848784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.848814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 
00:27:54.117 [2024-07-15 13:02:24.849071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.849100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.849238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.849270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.849549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.849579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.849787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.849818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.849961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.849991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.850190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.850220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.850353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.850384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.850522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.850551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.850830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.850861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.851083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.851113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 
00:27:54.117 [2024-07-15 13:02:24.851375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.851406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.851693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.851724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.851823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.117 [2024-07-15 13:02:24.851963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.851994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.852197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.852249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.852477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.852508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.852722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.852753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.852898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.852929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.853119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.853149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.853345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.853377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 
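
Note on the spdk_app_start NOTICE interleaved above ("Total cores available: 4"): the count reflects the core set the DPDK EAL hands to the SPDK app, which follows the process CPU affinity mask and any core-mask option rather than the raw host core count. The sketch below is only a rough host-side stand-in for where such a number comes from, not SPDK's actual code path.

    /* cores.c: report the online CPU count, an illustrative stand-in for
     * the core total a DPDK/SPDK app logs at startup.
     * Build: cc -o cores cores.c */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* The EAL derives its usable-core set from the process CPU
         * affinity mask / core-mask options; sysconf() reports the raw
         * online count, so the two differ when the app is pinned. */
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        if (n < 0) {
            perror("sysconf");
            return 1;
        }
        printf("online cores: %ld\n", n);
        return 0;
    }
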
00:27:54.117 [2024-07-15 13:02:24.853586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.853618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.853745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.853775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.854031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.854061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.854209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.854248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.854510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.854541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.854740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.854771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.854919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.854949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.855204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.855245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.855460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.855491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 00:27:54.117 [2024-07-15 13:02:24.855748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.117 [2024-07-15 13:02:24.855778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.117 qpair failed and we were unable to recover it. 
00:27:54.118 [2024-07-15 13:02:24.856038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.856068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.856193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.856223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.856493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.856524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.856728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.856758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.856882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.856912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.857123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.857154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.857461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.857493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.857707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.857736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.857944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.857975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.858196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.858235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 
00:27:54.118 [2024-07-15 13:02:24.858516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.858548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.858808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.858838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.859047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.859079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.859267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.859298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.859500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.859531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.859823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.859854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.860053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.860083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.860297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.860329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.860625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.860656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.860868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.860900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 
00:27:54.118 [2024-07-15 13:02:24.861131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.861163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.861354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.861393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.861627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.861658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.861873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.861903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.862128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.862158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.862356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.862387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.862605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.862635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.862781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.862811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.863012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.863042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.863187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.863218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 
00:27:54.118 [2024-07-15 13:02:24.863487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.863517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.863721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.863751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.863979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.864009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.864151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.864181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.864456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.864488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.864648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.864679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.864820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.864850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.865004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.865035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.865243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.865275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 00:27:54.118 [2024-07-15 13:02:24.865472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.118 [2024-07-15 13:02:24.865503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.118 qpair failed and we were unable to recover it. 
00:27:54.118 [2024-07-15 13:02:24.865642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.865672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.865878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.865908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.866047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.866078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.866367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.866398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.866619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.866649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.866949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.866979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.867174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.867205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.867416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.867446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.867663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.867694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.867887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.867919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 
00:27:54.119 [2024-07-15 13:02:24.868124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.868154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.868414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.868446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.868577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.868607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.868818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.868849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.869169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.869200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.869405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.869436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.869578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.869608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.869819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.869849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.870106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.870136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.870411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.870443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 
00:27:54.119 [2024-07-15 13:02:24.870657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.870688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.870829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.870864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.871076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.871106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.871365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.871396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.871538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.871569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.871835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.871865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.872009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.872039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.872322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.872352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.872605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.872636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.872918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.872948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 
00:27:54.119 [2024-07-15 13:02:24.873202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.873241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.873453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.873484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.873708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.873738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.873996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.874026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.874181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.874211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.874485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.874517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.874641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.874672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.874934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.874963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.875243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.875275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.875530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.875560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 
00:27:54.119 [2024-07-15 13:02:24.875826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-07-15 13:02:24.875857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.119 qpair failed and we were unable to recover it. 00:27:54.119 [2024-07-15 13:02:24.876116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.876146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.876302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.876335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.876533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.876563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.876842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.876872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.877105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.877137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.877396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.877427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.877705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.877735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.878001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.878032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.878310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.878342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 
00:27:54.120 [2024-07-15 13:02:24.878579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.878610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.878817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.878848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.879002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.879032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.879314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.879344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.879604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.879635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.879759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.879789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.879977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.880007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.880295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.880327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.880440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.880470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.880735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.880765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 
00:27:54.120 [2024-07-15 13:02:24.880919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.880950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.881155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.881190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.881447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.881479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.881705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.881735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.881947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.881977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.882120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.882151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.882410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.882440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.882717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.882747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.882901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.882931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.883213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.883252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 
00:27:54.120 [2024-07-15 13:02:24.883453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.883484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.883639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.883670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.883875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.883905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.884132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.884163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.884359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.884390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.120 [2024-07-15 13:02:24.884667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-07-15 13:02:24.884697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.120 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.884959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.884989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.885195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.885233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.885492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.885523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.885737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.885766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 
00:27:54.121 [2024-07-15 13:02:24.885916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.885946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.886155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.886186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.886340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.886371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.886519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.886550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.886736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.886768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.886916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.886947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.887237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.887271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.887477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.887511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.887793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.887873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.888101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.888174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 
00:27:54.121 [2024-07-15 13:02:24.888428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.888464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.888604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.888636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.888906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.888939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.889066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.889099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.889361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.889393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.889526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.889557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.889835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.889865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.890055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.890086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.890223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.890265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.890472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.890503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 
00:27:54.121 [2024-07-15 13:02:24.890641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.890673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.890932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.890973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.891175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.891207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.891485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.891517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.891672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.891703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.891914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.891945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.892200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.892242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.892448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.892480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.892680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.892711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.892901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.892934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 
00:27:54.121 [2024-07-15 13:02:24.893085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.893116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.893326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.893359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.893556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.893587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.893793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.893825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.894020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.894052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.894202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.894241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.121 qpair failed and we were unable to recover it. 00:27:54.121 [2024-07-15 13:02:24.894455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.121 [2024-07-15 13:02:24.894486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.894705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.894737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.894879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.894910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.895098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.895130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 
00:27:54.122 [2024-07-15 13:02:24.895389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.895422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.895616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.895649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.895860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.895891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.896151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.896183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.896452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.896484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.896626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.896658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.896873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.896903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.897119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.897150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.897394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.897438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.897636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.897666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 
00:27:54.122 [2024-07-15 13:02:24.897898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.897928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.898186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.898216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.898482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.898512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.898706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.898736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.899021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.899051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.899243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.899273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.899533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.899563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.899716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.899746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.900030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.900059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.900319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.900351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 
00:27:54.122 [2024-07-15 13:02:24.900486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.900516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.900671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.900708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.900968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.900999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.901202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.901242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.901389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.901418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.901622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.901652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.901792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.901823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.902016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.902045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.902248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.902279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.902471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.902501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 
00:27:54.122 [2024-07-15 13:02:24.902636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.902666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.902961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.902990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.903181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.903211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.903377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.903408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.903633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.903663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.903920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.903950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.904092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.122 [2024-07-15 13:02:24.904122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.122 qpair failed and we were unable to recover it. 00:27:54.122 [2024-07-15 13:02:24.904332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.904363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.904577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.904607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.904910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.904939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 
00:27:54.123 [2024-07-15 13:02:24.905217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.905253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.905460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.905490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.905643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.905673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.905954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.905984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.906192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.906222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.906501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.906531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.906801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.906831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.907037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.907067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.907222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.907263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.907526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.907556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 
00:27:54.123 [2024-07-15 13:02:24.907683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.907713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.907905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.907935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.908216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.908262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.908388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.908419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.908564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.908595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.908853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.908884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.909166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.909197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.909344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.909375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.909643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.909673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.909877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.909907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 
00:27:54.123 [2024-07-15 13:02:24.910116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.910146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.910352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.910392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.910599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.910629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.910778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.910809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.911015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.911045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.911200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.911240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.911524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.911554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.911698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.911729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.911917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.911947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.912206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.912251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 
00:27:54.123 [2024-07-15 13:02:24.912446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.912477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.912679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.912710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.912981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.913011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.913280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.913312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.913598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.913628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.913877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.913908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.914187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.914218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.914421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.123 [2024-07-15 13:02:24.914452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.123 qpair failed and we were unable to recover it. 00:27:54.123 [2024-07-15 13:02:24.914725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.914755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.915045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.915075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 
00:27:54.124 [2024-07-15 13:02:24.915366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.915399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.915659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.915690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.916001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.916032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.916255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.916287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.916498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.916528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.916737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.916767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.917048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.917078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.917384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.917416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.917688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.917723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.918008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.918038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 
00:27:54.124 [2024-07-15 13:02:24.918332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.918363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.918650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.918679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.918968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.918998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.919297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.919328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.919582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.919611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.919823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.919853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.920135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.920165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.920403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.920434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.920642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.920672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.920957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.920987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 
00:27:54.124 [2024-07-15 13:02:24.921293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.921324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.921606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.921641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.921935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.921968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.922285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.922325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.922638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.922668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.922971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.923000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.923284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.923314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.923574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.923604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.923847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.923877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.924113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.924146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 
00:27:54.124 [2024-07-15 13:02:24.924367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.924402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.924682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.924716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.924981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.925013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.925326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.925361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.925625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.925655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.925755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.124 [2024-07-15 13:02:24.925786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.124 [2024-07-15 13:02:24.925794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.124 [2024-07-15 13:02:24.925801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.124 [2024-07-15 13:02:24.925806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.124 [2024-07-15 13:02:24.925919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:54.124 [2024-07-15 13:02:24.925958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.124 [2024-07-15 13:02:24.925988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.124 qpair failed and we were unable to recover it. 00:27:54.124 [2024-07-15 13:02:24.926026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:54.124 [2024-07-15 13:02:24.926131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:54.124 [2024-07-15 13:02:24.926133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:54.125 [2024-07-15 13:02:24.926220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.926261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it.
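The app_setup_trace notices above give the exact commands for inspecting the tracepoint data while the target is up. A minimal session, using only what the log itself prints (the 'nvmf' shm name, instance id 0, and the /dev/shm/nvmf_trace.0 file), would be:

$ spdk_trace -s nvmf -i 0            # capture a snapshot of events at runtime
$ spdk_trace                          # also works while this is the only SPDK application running
$ cp /dev/shm/nvmf_trace.0 /tmp/     # keep the raw trace file for offline analysis/debug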
00:27:54.125 [2024-07-15 13:02:24.926489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.926519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.926732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.926763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.927044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.927074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.927377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.927409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.927611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.927642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.927930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.927961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.928218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.928255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.928571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.928601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.928865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.928896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.929153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.929183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 
00:27:54.125 [2024-07-15 13:02:24.929477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.929507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.929717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.929747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.930036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.930066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.930347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.930377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.930680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.930710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.931019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.931049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.931329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.931361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.931649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.931680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.931938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.931968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.932294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.932326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 
00:27:54.125 [2024-07-15 13:02:24.932601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.932630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.932823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.932858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.933121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.933152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.933361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.933392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.933617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.933647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.933932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.933962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.934121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.934151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.934435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.934466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.934728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.934759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 00:27:54.125 [2024-07-15 13:02:24.934992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.125 [2024-07-15 13:02:24.935022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.125 qpair failed and we were unable to recover it. 
00:27:54.125 [2024-07-15 13:02:24.935214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.125 [2024-07-15 13:02:24.935255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.125 qpair failed and we were unable to recover it.
00:27:54.125 [... the same three-line connect()/qpair-failure record repeats for tqpair=0x7fa940000b90 from 13:02:24.935214 through 13:02:24.956197 ...]
00:27:54.127 [2024-07-15 13:02:24.956566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.127 [2024-07-15 13:02:24.956653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.127 qpair failed and we were unable to recover it.
00:27:54.127 [... the same record repeats for tqpair=0x110ced0 from 13:02:24.956566 through 13:02:24.992198 ...]
00:27:54.130 [2024-07-15 13:02:24.992608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.130 [2024-07-15 13:02:24.992694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:54.130 qpair failed and we were unable to recover it.
00:27:54.130 [2024-07-15 13:02:24.992987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.131 [2024-07-15 13:02:24.993056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.131 qpair failed and we were unable to recover it.
00:27:54.131 [... the same record repeats for tqpair=0x7fa950000b90 through 13:02:24.995191 ...]
00:27:54.131 [2024-07-15 13:02:24.995471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.995503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.995699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.995729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.995997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.996028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.996237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.996269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.996528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.996558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.996861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.996891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.997176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.997207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.997507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.997537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.997820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.997850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.998048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.998077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 
00:27:54.131 [2024-07-15 13:02:24.998369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.998401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.998669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.998699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.998900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.998930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.999212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.999251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.999455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.999484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.999767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:24.999796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:24.999998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.000028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.000241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.000272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.000477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.000507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.000714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.000743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 
00:27:54.131 [2024-07-15 13:02:25.000962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.000992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.001253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.001284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.001440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.001469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.001752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.001782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.002056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.002086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.002347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.002378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.002654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.002684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.002940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.002970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.003236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.003267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.003475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.003505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 
00:27:54.131 [2024-07-15 13:02:25.003694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.003724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.004004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.004033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.131 [2024-07-15 13:02:25.004275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.131 [2024-07-15 13:02:25.004306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.131 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.004529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.004559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.004760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.004789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.005092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.005122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.005424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.005453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.005686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.005717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.006023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.006053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.006281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.006313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 
00:27:54.132 [2024-07-15 13:02:25.006517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.006547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.006804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.006833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.007119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.007148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.007383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.007413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.007628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.007658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.007938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.007967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.008222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.008260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.008556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.008586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.008870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.008900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.009111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.009141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 
00:27:54.132 [2024-07-15 13:02:25.009343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.009378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.009653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.009682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.009985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.010015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.010221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.010263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.010472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.010502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.010784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.010814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.010968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.010997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.011151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.011180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.011410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.011441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.011669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.011699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 
00:27:54.132 [2024-07-15 13:02:25.011956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.011985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.012248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.012279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.012484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.012514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.012704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.012734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.012998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.013028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.013336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.013366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.013513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.013543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.013825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.013855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.014156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.014185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.014502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.014533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 
00:27:54.132 [2024-07-15 13:02:25.014779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.014810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.015019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.015049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.015308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.015339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.132 [2024-07-15 13:02:25.015622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.132 [2024-07-15 13:02:25.015651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.132 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.015977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.016007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.016273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.016303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.016520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.016550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.016818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.016848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.017052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.017082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.017365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.017395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 
00:27:54.133 [2024-07-15 13:02:25.017674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.017704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.018013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.018047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.018319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.018351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.018655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.018686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.018962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.018992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.019182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.019212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.019484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.019514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.019722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.019751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.020015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.020045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.020269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.020301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 
00:27:54.133 [2024-07-15 13:02:25.020522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.020557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.020795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.020825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.021016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.021046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.021251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.021282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.021489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.021519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.021799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.021829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.022136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.022166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.022324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.022355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.022558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.022588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.022866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.022896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 
00:27:54.133 [2024-07-15 13:02:25.023189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.023219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.023443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.023473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.023757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.023786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.023988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.024018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.024242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.024273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.024580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.024610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.024879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.024909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.025173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.025203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.133 [2024-07-15 13:02:25.025405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.133 [2024-07-15 13:02:25.025436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.133 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.025707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.025736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 
00:27:54.134 [2024-07-15 13:02:25.026037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.026068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.026328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.026360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.026562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.026592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.026794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.026824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.027014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.027044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.027260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.027290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.027575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.027605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.027756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.027786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.027985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.028015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.028205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.028243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 
00:27:54.134 [2024-07-15 13:02:25.028471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.028501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.028726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.028757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.028947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.028977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.029254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.029284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.029490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.029520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.029821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.029852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.030000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.030030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.030262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.030294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.030501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.030532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 00:27:54.134 [2024-07-15 13:02:25.030789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.030819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it. 
00:27:54.134 [2024-07-15 13:02:25.031098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.134 [2024-07-15 13:02:25.031133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.134 qpair failed and we were unable to recover it.
[... the same three-line error pattern (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously, with only the timestamps advancing, from 13:02:25.031 through 13:02:25.083 ...]
00:27:54.416 [2024-07-15 13:02:25.084174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.416 [2024-07-15 13:02:25.084265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.416 qpair failed and we were unable to recover it.
[... the identical retry pattern then continues against the new qpair, tqpair=0x7fa940000b90, through 13:02:25.090, where this excerpt of the log ends ...]
00:27:54.417 [2024-07-15 13:02:25.090233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.090264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.090544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.090575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.090877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.090907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.091187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.091217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.091550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.091581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.091779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.091809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.092064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.092094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.092306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.092339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.092611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.092640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.092898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.092929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 
00:27:54.417 [2024-07-15 13:02:25.093209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.093247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.093471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.093501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.093721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.093750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.093977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.094007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.094242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.094272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.094463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.094493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.094703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.094733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.094962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.094997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.095270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.095301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.095572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.095601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 
00:27:54.417 [2024-07-15 13:02:25.095828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.095857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.096066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.096096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.096354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.096385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.096665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.096695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.096992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.097022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.097310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.097625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.097655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.097811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.097840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.098096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.098126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 00:27:54.417 [2024-07-15 13:02:25.098406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.417 [2024-07-15 13:02:25.098435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.417 qpair failed and we were unable to recover it. 
00:27:54.418 [2024-07-15 13:02:25.098712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.098748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.098979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.099009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.099265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.099296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.099580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.099610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.099854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.099884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.100079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.100110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.100317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.100348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.100560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.100589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.100807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.100837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.101092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.101123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 
00:27:54.418 [2024-07-15 13:02:25.101333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.101364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.101574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.101604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.101826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.101855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.102113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.102143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.102428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.102459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.102760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.102790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.103075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.103104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.103362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.103392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.103643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.103674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.103934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.103964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 
00:27:54.418 [2024-07-15 13:02:25.104168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.104208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.104444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.104475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.104730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.104760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.105022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.105052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.105309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.105340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.105648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.105677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.105956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.105986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.106196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.106236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.106526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.106556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.106840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.106870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 
00:27:54.418 [2024-07-15 13:02:25.107150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.107180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.107327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.107358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.107549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.107579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.107726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.107756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.108038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.108068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.108320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.108350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.108556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.108586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.418 qpair failed and we were unable to recover it. 00:27:54.418 [2024-07-15 13:02:25.108868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.418 [2024-07-15 13:02:25.108897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.109192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.109222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.109484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.109514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 
00:27:54.419 [2024-07-15 13:02:25.109832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.109867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.110129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.110159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.110369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.110400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.110666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.110696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.110957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.110987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.111200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.111248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.111532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.111562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.111822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.111852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.112174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.112204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.112496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.112527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 
00:27:54.419 [2024-07-15 13:02:25.112787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.112817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.113135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.113165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.113419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.113450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.113768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.113797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.114067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.114096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.114307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.114338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.114538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.114568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.114845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.114874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.115131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.115161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.115419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.115450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 
00:27:54.419 [2024-07-15 13:02:25.115734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.115764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.116084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.116114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.116330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.116361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.116667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.116696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.116976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.117006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.117312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.117343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.117577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.117606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.117950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.117985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.118268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.118303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.118595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.118625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 
00:27:54.419 [2024-07-15 13:02:25.118908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.118938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.119163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.119193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.119479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.119509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.119826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.119856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.120079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.120109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.120387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.120418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.120713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.120743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.120911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.419 [2024-07-15 13:02:25.120941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.419 qpair failed and we were unable to recover it. 00:27:54.419 [2024-07-15 13:02:25.121158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.121188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.121457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.121488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 
00:27:54.420 [2024-07-15 13:02:25.121678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.121714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.121974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.122004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.122243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.122274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.122555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.122584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.122839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.122869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.123077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.123106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.123380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.123412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.123669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.123698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.123982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.124012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.124239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.124270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 
00:27:54.420 [2024-07-15 13:02:25.124428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.124458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.124735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.124765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.125056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.125085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.125341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.125372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.125656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.125686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.125970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.125999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.126287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.126317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.126523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.126554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.126755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.126784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 00:27:54.420 [2024-07-15 13:02:25.127012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.420 [2024-07-15 13:02:25.127042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.420 qpair failed and we were unable to recover it. 
00:27:54.420 [2024-07-15 13:02:25.127245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.420 [2024-07-15 13:02:25.127276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.420 qpair failed and we were unable to recover it.
00:27:54.420 [... the same three-line failure repeats continuously from 13:02:25.127 through 13:02:25.186 (roughly two hundred occurrences elided): every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111, the failing tqpair switches between 0x7fa950000b90 and 0x7fa940000b90, and no qpair is recovered ...]
00:27:54.426 [2024-07-15 13:02:25.186613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.186642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.186924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.186953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.187253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.187285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.187570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.187600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.187901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.187930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.188154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.188184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.188460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.188491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.188700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.188729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.188923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.188953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.189208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.189246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 
00:27:54.426 [2024-07-15 13:02:25.189442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.189472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.189754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.189784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.190090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.190120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.190348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.190379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.190572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.190601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.190805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.190835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.191115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.191145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.191381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.191411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.191623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.191654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.191860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.191890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 
00:27:54.426 [2024-07-15 13:02:25.192171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.192200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.192442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.192471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.192680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.192710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.192907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.192936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.193258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.193289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.193494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.193525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.193804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.193834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.194089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.194119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.194329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.194360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.194640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.194670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 
00:27:54.426 [2024-07-15 13:02:25.194953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.194983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.195252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.195283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.195482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.195512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.195786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.195815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.196107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.196136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.196368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.426 [2024-07-15 13:02:25.196398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.426 qpair failed and we were unable to recover it. 00:27:54.426 [2024-07-15 13:02:25.196705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.196735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.196932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.196967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.197197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.197235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.197449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.197479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 
00:27:54.427 [2024-07-15 13:02:25.197685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.197714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.198015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.198044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.198267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.198298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.198507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.198537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.198748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.198778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.198969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.198998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.199270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.199302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.199572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.199602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.199902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.199931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.200216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.200255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 
00:27:54.427 [2024-07-15 13:02:25.200513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.200543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.200809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.200839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.201035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.201064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.201347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.201378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.201604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.201634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.201915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.201944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.202249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.202280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.202485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.202515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.202725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.202754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.203011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.203041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 
00:27:54.427 [2024-07-15 13:02:25.203317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.203348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.203611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.203640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.203900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.203930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.204150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.204180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.204450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.204482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.204708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.204738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.205017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.205047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.205308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.205338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.205597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.205627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.205819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.205848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 
00:27:54.427 [2024-07-15 13:02:25.206049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.206079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.206360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.206392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.206720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.206750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.207032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.207062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.207267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.207297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.207573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.207604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.427 qpair failed and we were unable to recover it. 00:27:54.427 [2024-07-15 13:02:25.207860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.427 [2024-07-15 13:02:25.207890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.208093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.208128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.208408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.208439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.208630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.208660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 
00:27:54.428 [2024-07-15 13:02:25.208929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.208958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.209154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.209183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.209476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.209507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.209763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.209793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.210072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.210102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.210403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.210435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.210720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.210749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.210939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.210968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.211253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.211284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.211551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.211580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 
00:27:54.428 [2024-07-15 13:02:25.211852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.211882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.212042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.212073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.212266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.212297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.212608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.212638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.212911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.212941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.213134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.213163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.213439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.213469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.213768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.213798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.214046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.214076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.214299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.214330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 
00:27:54.428 [2024-07-15 13:02:25.214614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.214644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.214900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.214930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.215214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.215252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.215546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.215576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.215865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.215894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.216191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.216220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.216528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.216558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.216764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.216793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.217056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.217086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.217361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.217392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 
00:27:54.428 [2024-07-15 13:02:25.217672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.217702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.218001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.218031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.218287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.218318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.218530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.218563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.218813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.218842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.219122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.219152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.219458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.428 [2024-07-15 13:02:25.219489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.428 qpair failed and we were unable to recover it. 00:27:54.428 [2024-07-15 13:02:25.219696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.219732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.219940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.219970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.220274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.220305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 
00:27:54.429 [2024-07-15 13:02:25.220603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.220633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.220849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.220879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.221135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.221164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.221368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.221399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.221675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.221705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.221916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.221945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.222239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.222270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.222576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.222606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.222912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.222941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 00:27:54.429 [2024-07-15 13:02:25.223232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.429 [2024-07-15 13:02:25.223264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.429 qpair failed and we were unable to recover it. 
00:27:54.429 [2024-07-15 13:02:25.223545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.429 [2024-07-15 13:02:25.223575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.429 qpair failed and we were unable to recover it.
00:27:54.429 [2024-07-15 13:02:25.223793 .. 13:02:25.282983] the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeated about 200 more times (wall clock 00:27:54.429 - 00:27:54.434), switching back and forth between tqpair=0x7fa950000b90 and tqpair=0x7fa948000b90, always with addr=10.0.0.2, port=4420; no reconnect attempt recovered.
00:27:54.434 [2024-07-15 13:02:25.283193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.283223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.434 [2024-07-15 13:02:25.283523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.283553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.434 [2024-07-15 13:02:25.283757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.283788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.434 [2024-07-15 13:02:25.284046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.284077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.434 [2024-07-15 13:02:25.284371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.284405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.434 [2024-07-15 13:02:25.284618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.284648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.434 [2024-07-15 13:02:25.284913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.434 [2024-07-15 13:02:25.284943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.434 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.285210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.285258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.285513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.285544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.285775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.285806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 
00:27:54.435 [2024-07-15 13:02:25.286081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.286111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.286314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.286346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.286577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.286607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.286810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.286840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.287096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.287127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.287354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.287386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.287662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.287692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.287951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.287981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.288262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.288293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.288552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.288582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 
00:27:54.435 [2024-07-15 13:02:25.288863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.288894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.289195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.289232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.289459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.289490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.289699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.289730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.290016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.290046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.290262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.290294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.290574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.290604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.290864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.290894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.291102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.291133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.291274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.291305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 
00:27:54.435 [2024-07-15 13:02:25.291562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.291592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.291906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.291937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.292251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.292283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.292556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.292585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.292738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.292767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.292912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.292942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.293152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.293182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.293514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.293545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.293753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.293783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.294042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.294071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 
00:27:54.435 [2024-07-15 13:02:25.294262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.294294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.294506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.294536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.294798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.294828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.294963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.294994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.295171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.295206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.295482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.295512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.295750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.435 [2024-07-15 13:02:25.295780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.435 qpair failed and we were unable to recover it. 00:27:54.435 [2024-07-15 13:02:25.296061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.296091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.296349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.296381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.296613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.296643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 
00:27:54.436 [2024-07-15 13:02:25.296843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.296873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.297070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.297100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.297356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.297387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.297538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.297569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.297786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.297816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.298025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.298055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.298294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.298325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.298551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.298582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.298860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.298890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.299199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.299235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 
00:27:54.436 [2024-07-15 13:02:25.299503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.299533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.299788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.299818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.300100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.300130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.300432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.300463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.300697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.300728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.301004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.301034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.301248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.301279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.301571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.301602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.301891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.301922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.302204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.302252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 
00:27:54.436 [2024-07-15 13:02:25.302537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.302568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.302765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.302795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.302999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.303029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.303313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.303345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.303614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.303644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.303957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.303987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.304212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.304249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.304452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.304483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.304741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.304771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.304913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.304943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 
00:27:54.436 [2024-07-15 13:02:25.305201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.305237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.305440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.305470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.305679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.305710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.305853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.305883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.306095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.306130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.306432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.306463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.306672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.306702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.306988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.436 [2024-07-15 13:02:25.307018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.436 qpair failed and we were unable to recover it. 00:27:54.436 [2024-07-15 13:02:25.307253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.307284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.307558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.307589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 
00:27:54.437 [2024-07-15 13:02:25.307894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.307924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.308206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.308241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.308381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.308412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.308717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.308747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.309021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.309051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.309264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.309295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.309577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.309607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.309778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.309808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.310048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.310078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.310373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.310405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 
00:27:54.437 [2024-07-15 13:02:25.310661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.310691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.310999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.311029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.311309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.311340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.311503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.311533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.311703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.311733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.312016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.312046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.312320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.312351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.312638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.312668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.312865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.312895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.313062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.313092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 
00:27:54.437 [2024-07-15 13:02:25.313361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.313392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.313699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.313730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.314008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.314038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.314268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.314299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.314572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.314603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.314873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.314903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.315207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.315247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.315520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.315550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.315757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.315787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.316002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.316032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 
00:27:54.437 [2024-07-15 13:02:25.316334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.316365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.316646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.316676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.316962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.316992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.317195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.317231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.317520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.317556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.317843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.317873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.318083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.318113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.318309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.318341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.437 [2024-07-15 13:02:25.318599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.437 [2024-07-15 13:02:25.318629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.437 qpair failed and we were unable to recover it. 00:27:54.438 [2024-07-15 13:02:25.318823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.438 [2024-07-15 13:02:25.318853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.438 qpair failed and we were unable to recover it. 
00:27:54.438 [2024-07-15 13:02:25.319069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.438 [2024-07-15 13:02:25.319099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:54.438 qpair failed and we were unable to recover it.
[... the same posix.c:1038 "connect() failed, errno = 111" / nvme_tcp.c:2383 "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats 4 more times for tqpair=0x7fa948000b90 (13:02:25.319287-13:02:25.320167), then ~200 more times for tqpair=0x110ced0 with the same addr=10.0.0.2, port=4420 (13:02:25.320572-13:02:25.377203, console timestamps 00:27:54.438-00:27:54.727); repetitions elided ...]
00:27:54.727 [2024-07-15 13:02:25.377422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.727 [2024-07-15 13:02:25.377453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.727 qpair failed and we were unable to recover it.
00:27:54.727 [2024-07-15 13:02:25.377645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.377677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.377988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.378057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.378366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.378403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.378632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.378665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.378890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.378921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.379131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.379162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.379411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.379443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.379690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.379720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.380013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.380043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 00:27:54.727 [2024-07-15 13:02:25.380327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.727 [2024-07-15 13:02:25.380359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.727 qpair failed and we were unable to recover it. 
00:27:54.727 [2024-07-15 13:02:25.380506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.380536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.380700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.380730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.381007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.381037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.381301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.381332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.381549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.381588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.381801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.381831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.382140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.382170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.382392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.382424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.382695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.382725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.383005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.383036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 
00:27:54.728 [2024-07-15 13:02:25.383217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.383257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.383541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.383571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.383766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.383796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.384056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.384086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.384396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.384427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.384703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.384733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.385064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.385095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.385301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.385333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.385546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.385576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.385790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.385820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 
00:27:54.728 [2024-07-15 13:02:25.386030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.386060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.386314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.386345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.386486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.386515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.386752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.386782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.387075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.387106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.387403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.387434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.387741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.387770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.387968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.387999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.388204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.388256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.388467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.388498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 
00:27:54.728 [2024-07-15 13:02:25.388702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.388731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.388975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.389012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.389295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.389326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.389535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.389565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.389772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.389803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.390008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.390038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.390265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.390296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.390453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.390483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.390718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.390748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.390960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.390990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 
00:27:54.728 [2024-07-15 13:02:25.391147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.728 [2024-07-15 13:02:25.391176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.728 qpair failed and we were unable to recover it. 00:27:54.728 [2024-07-15 13:02:25.391477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.391508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.391766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.391796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.392013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.392044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.392246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.392277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.392495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.392526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.392737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.392767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.392974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.393004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.393293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.393325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.393585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.393615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 
00:27:54.729 [2024-07-15 13:02:25.393885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.393915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.394152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.394183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.394369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.394400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.394592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.394622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.394777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.394807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.395016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.395046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.395367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.395398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.395553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.395583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.395791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.395827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.396099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.396130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 
00:27:54.729 [2024-07-15 13:02:25.396350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.396382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.396638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.396668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.397002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.397032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.397270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.397301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.397556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.397587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.397816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.397846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.398047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.398077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.398288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.398319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.398565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.398596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.398826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.398856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 
00:27:54.729 [2024-07-15 13:02:25.399135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.399166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.399495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.399526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.399813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.399844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.400105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.729 [2024-07-15 13:02:25.400135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.729 qpair failed and we were unable to recover it. 00:27:54.729 [2024-07-15 13:02:25.400434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.400466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.400756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.400786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.401088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.401118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.401326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.401357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.401634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.401665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.401825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.401855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 
00:27:54.730 [2024-07-15 13:02:25.402137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.402167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.402514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.402545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.402747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.402777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.403092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.403123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.403403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.403435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.403640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.403675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.403971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.404001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.404285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.404316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.404574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.404605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.404814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.404844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 
00:27:54.730 [2024-07-15 13:02:25.405063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.405094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.405236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.405267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.405484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.405515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.405803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.405834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.406137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.406167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.406451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.406482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.406771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.406801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.407069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.407099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.407295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.407327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.407611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.407642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 
00:27:54.730 [2024-07-15 13:02:25.407898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.407928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.408207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.408249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.408450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.408480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.408755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.408786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.409090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.409121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.409392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.409424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.409624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.409655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.409885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.409916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.410190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.410221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.410519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.410550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 
00:27:54.730 [2024-07-15 13:02:25.410859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.410889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.411171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.411202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.411405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.411442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.411708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.411738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.411892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.730 [2024-07-15 13:02:25.411925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.730 qpair failed and we were unable to recover it. 00:27:54.730 [2024-07-15 13:02:25.412202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.731 [2024-07-15 13:02:25.412242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.731 qpair failed and we were unable to recover it. 00:27:54.731 [2024-07-15 13:02:25.412499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.731 [2024-07-15 13:02:25.412529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.731 qpair failed and we were unable to recover it. 00:27:54.731 [2024-07-15 13:02:25.412831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.731 [2024-07-15 13:02:25.412861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.731 qpair failed and we were unable to recover it. 00:27:54.731 [2024-07-15 13:02:25.413141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.731 [2024-07-15 13:02:25.413171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.731 qpair failed and we were unable to recover it. 00:27:54.731 [2024-07-15 13:02:25.413485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.731 [2024-07-15 13:02:25.413517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.731 qpair failed and we were unable to recover it. 
00:27:54.731 [2024-07-15 13:02:25.413727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.731 [2024-07-15 13:02:25.413757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.731 qpair failed and we were unable to recover it.
[... 4 similar connect() failures (errno = 111) for tqpair=0x110ced0 omitted ...]
00:27:54.731 [2024-07-15 13:02:25.415305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.731 [2024-07-15 13:02:25.415338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.731 qpair failed and we were unable to recover it.
00:27:54.731 [2024-07-15 13:02:25.415634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.731 [2024-07-15 13:02:25.415702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.731 qpair failed and we were unable to recover it.
[... 38 similar connect() failures (errno = 111) for tqpair=0x7fa950000b90 omitted ...]
00:27:54.732 [2024-07-15 13:02:25.426694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.426724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
00:27:54.732 [2024-07-15 13:02:25.426942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.426977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
00:27:54.732 [2024-07-15 13:02:25.427166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.427196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
00:27:54.732 [2024-07-15 13:02:25.427261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111b000 (9): Bad file descriptor
00:27:54.732 [2024-07-15 13:02:25.427569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.427636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
00:27:54.732 [2024-07-15 13:02:25.427932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.427964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
[... 4 similar connect() failures (errno = 111) for tqpair=0x7fa950000b90 omitted ...]
00:27:54.732 [2024-07-15 13:02:25.429200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.429327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
00:27:54.732 [2024-07-15 13:02:25.429564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.732 [2024-07-15 13:02:25.429603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:54.732 qpair failed and we were unable to recover it.
[... 122 similar connect() failures (errno = 111) for tqpair=0x7fa948000b90 omitted ...]
00:27:54.735 [2024-07-15 13:02:25.459062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.735 [2024-07-15 13:02:25.459092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:54.735 qpair failed and we were unable to recover it.
00:27:54.735 [2024-07-15 13:02:25.459335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.735 [2024-07-15 13:02:25.459402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.735 qpair failed and we were unable to recover it.
[... 28 similar connect() failures (errno = 111) for tqpair=0x7fa940000b90 omitted ...]
00:27:54.736 [2024-07-15 13:02:25.466438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.736 [2024-07-15 13:02:25.466468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.736 qpair failed and we were unable to recover it.
00:27:54.736 [2024-07-15 13:02:25.466724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.466754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.466900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.466929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.467140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.467170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.467436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.467467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.467612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.467643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.467898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.467928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.468134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.468163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.468444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.468476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.468637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.468667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.468895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.468924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 
00:27:54.736 [2024-07-15 13:02:25.469068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.469109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.469406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.469439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.469579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.469610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.469866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.469896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.470185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.470215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.470374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.470405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.470664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.470695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.470971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.471001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.471146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.471176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.471316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.471347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 
00:27:54.736 [2024-07-15 13:02:25.471480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.471511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.471769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.471799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.472027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.472056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.472257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.472288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.736 [2024-07-15 13:02:25.472501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.736 [2024-07-15 13:02:25.472532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.736 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.472732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.472762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.472973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.473002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.473141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.473172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.473391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.473422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.473639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.473670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 
00:27:54.737 [2024-07-15 13:02:25.473927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.473958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.474243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.474274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.474497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.474526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.474717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.474747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.474997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.475027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.475243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.475275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.475480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.475510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.475658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.475693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.475895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.475925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.476114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.476144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 
00:27:54.737 [2024-07-15 13:02:25.476403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.476434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.476565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.476595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.476785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.476816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.476965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.476995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.477288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.477320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.477579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.477609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.477913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.477944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.478377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.478412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.478624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.478655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.478856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.478887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 
00:27:54.737 [2024-07-15 13:02:25.479148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.479181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.479513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.479549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.479854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.479884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.480131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.480161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.480372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.480405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.480665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.480695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.480902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.480932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.481142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.481171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.481331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.481364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.481562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.481593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 
00:27:54.737 [2024-07-15 13:02:25.481789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.481819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.481970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.482001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.737 qpair failed and we were unable to recover it. 00:27:54.737 [2024-07-15 13:02:25.482141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.737 [2024-07-15 13:02:25.482172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.482440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.482471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.482636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.482670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.482945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.482974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.483121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.483151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.483389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.483420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.483700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.483730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.483954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.483984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 
00:27:54.738 [2024-07-15 13:02:25.484266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.484297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.484577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.484606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.484816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.484845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.485060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.485089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.485400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.485430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.485634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.485664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.485872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.485902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.486104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.486138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.486425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.486456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.486647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.486676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 
00:27:54.738 [2024-07-15 13:02:25.486885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.486915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.487195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.487236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.487454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.487484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.487683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.487713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.487916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.487946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.488146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.488175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.488334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.488366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.488556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.488586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.488787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.488817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.489026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.489055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 
00:27:54.738 [2024-07-15 13:02:25.489256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.489287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.489551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.489581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.489839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.489868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.490096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.490125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.490348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.490378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.490486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.490517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.490773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.490803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.491103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.491132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.491395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.491426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 00:27:54.738 [2024-07-15 13:02:25.491628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.491658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.738 qpair failed and we were unable to recover it. 
00:27:54.738 [2024-07-15 13:02:25.491933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.738 [2024-07-15 13:02:25.491962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.492152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.492181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.492395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.492425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.492614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.492644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.492796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.492842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.492990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.493021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.493241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.493272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.493495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.493526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.493685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.493714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.494006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.494036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 
00:27:54.739 [2024-07-15 13:02:25.494264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.494295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.494444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.494474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.494616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.494646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.494884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.494914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.495063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.495093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.495400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.495432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.495681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.495711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.495921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.495959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.496219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.496260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.496519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.496549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 
00:27:54.739 [2024-07-15 13:02:25.496689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.496720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.496976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.497006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.497284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.497315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.497518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.497549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.497696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.497726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.498009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.498040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.498261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.498292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.498555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.498585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.498803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.498833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 00:27:54.739 [2024-07-15 13:02:25.498972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.739 [2024-07-15 13:02:25.499002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.739 qpair failed and we were unable to recover it. 
00:27:54.739 [2024-07-15 13:02:25.499153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.739 [2024-07-15 13:02:25.499183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.739 qpair failed and we were unable to recover it.
00:27:54.739-00:27:54.744 [2024-07-15 13:02:25.499406 through 13:02:25.549557] The same three-line failure (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it.) repeats for every reconnect attempt in this window, differing only in timestamp and qpair handle, cycling through tqpair=0x7fa950000b90, 0x110ced0, 0x7fa940000b90 and 0x7fa948000b90, always with addr=10.0.0.2, port=4420.
00:27:54.744 [2024-07-15 13:02:25.549739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.744 [2024-07-15 13:02:25.549770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.744 qpair failed and we were unable to recover it.
00:27:54.744 [2024-07-15 13:02:25.549972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.744 [2024-07-15 13:02:25.550002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.744 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.550260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.550291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.550573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.550604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.550742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.550772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.550902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.550933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.551069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.551100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.551252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.551288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.551496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.551527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.551765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.551795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.551949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.551980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 
00:27:54.745 [2024-07-15 13:02:25.552178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.552208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.552418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.552449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.552755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.552786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.552993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.553022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.553167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.553198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.553443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.553476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.553708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.553739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.553940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.553971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.554239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.554270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.554479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.554509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 
00:27:54.745 [2024-07-15 13:02:25.554726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.554756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.554900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.554931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.555200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.555240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.555441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.555472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.555625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.555655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.555915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.555947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.556162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.556193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.556332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.556365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.556651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.556681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.556890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.556920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 
00:27:54.745 [2024-07-15 13:02:25.557063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.557093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.557303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.557335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.557550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.557580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.557729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.557759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.558056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.558086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.558299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.558330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.558596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.558626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.558840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.558871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.559015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.559044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.559349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.559382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 
00:27:54.745 [2024-07-15 13:02:25.559641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.745 [2024-07-15 13:02:25.559671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.745 qpair failed and we were unable to recover it. 00:27:54.745 [2024-07-15 13:02:25.559779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.559809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.560098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.560128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.560405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.560435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.560628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.560658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.560866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.560896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.561101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.561137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.561355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.561385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.561524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.561554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.561765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.561796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 
00:27:54.746 [2024-07-15 13:02:25.562078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.562108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.562296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.562329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.562518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.562548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.562777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.562807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.563027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.563058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.563181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.563211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.563496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.563528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.563735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.563765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.564041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.564070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.564349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.564380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 
00:27:54.746 [2024-07-15 13:02:25.564531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.564561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.564757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.564788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.564994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.565025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.565301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.565333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.565565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.565595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.565814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.565844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.566130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.566161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.566355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.566387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.566524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.566554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.566710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.566740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 
00:27:54.746 [2024-07-15 13:02:25.566880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.566910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.567188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.567218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.567359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.567389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.567535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.567565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.567722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.567752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.567941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.746 [2024-07-15 13:02:25.567971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.746 qpair failed and we were unable to recover it. 00:27:54.746 [2024-07-15 13:02:25.568180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.568210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.568416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.568448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.568646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.568676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.568810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.568840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 
00:27:54.747 [2024-07-15 13:02:25.569101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.569131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.569429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.569461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.569613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.569643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.569928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.569958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.570217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.570265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.570547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.570577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.570730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.570765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.570913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.570943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.571147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.571176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.571396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.571427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 
00:27:54.747 [2024-07-15 13:02:25.571707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.571737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.571954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.571984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.572205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.572244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.572468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.572497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.572700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.572730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.572869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.572899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.573155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.573185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.573381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.573411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.573618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.573648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.573919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.573949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 
00:27:54.747 [2024-07-15 13:02:25.574267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.574299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.574566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.574596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.574901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.574931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.575214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.575252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.575569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.575599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.575882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.575912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.576200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.576237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.576521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.576551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.576836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.576865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.577153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.577182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 
00:27:54.747 [2024-07-15 13:02:25.577474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.577505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.577791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.577821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.578108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.578137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.578369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.578400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.578674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.578704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.579011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.579041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.747 [2024-07-15 13:02:25.579323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.747 [2024-07-15 13:02:25.579354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.747 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.579541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.579571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.579845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.579875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.580078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.580108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 
00:27:54.748 [2024-07-15 13:02:25.580390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.580421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.580728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.580757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.581014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.581044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.581353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.581384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.581528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.581558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.581813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.581843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.582124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.582164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.582461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.582493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.582768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.582798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 00:27:54.748 [2024-07-15 13:02:25.582997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.748 [2024-07-15 13:02:25.583027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420 00:27:54.748 qpair failed and we were unable to recover it. 
00:27:54.748 [2024-07-15 13:02:25.583304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.748 [2024-07-15 13:02:25.583334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.748 qpair failed and we were unable to recover it.
[... the preceding connect()/qpair error triple repeats for tqpair=0x7fa940000b90 through 13:02:25.591 ...]
00:27:54.749 [2024-07-15 13:02:25.591547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.749 [2024-07-15 13:02:25.591584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa950000b90 with addr=10.0.0.2, port=4420
00:27:54.749 qpair failed and we were unable to recover it.
[... repeated for tqpair=0x7fa950000b90 through 13:02:25.602 ...]
[... connect()/qpair errors for tqpair=0x7fa950000b90 continue through 13:02:25.604 ...]
00:27:54.750 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:54.750 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:54.750 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:54.750 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:54.750 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair errors for tqpair=0x7fa950000b90, interleaved with the xtrace lines above, continue through 13:02:25.608 ...]
[... connect()/qpair errors for tqpair=0x7fa950000b90 continue through 13:02:25.611 ...]
00:27:54.751 [2024-07-15 13:02:25.611783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.751 [2024-07-15 13:02:25.611819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.751 qpair failed and we were unable to recover it.
[... repeated for tqpair=0x7fa940000b90 through 13:02:25.621 ...]
00:27:54.751 [2024-07-15 13:02:25.621784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.751 [2024-07-15 13:02:25.621829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.752 qpair failed and we were unable to recover it.
[... repeated for tqpair=0x110ced0 through 13:02:25.635 ...]
00:27:54.753 [2024-07-15 13:02:25.635322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.635354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.635490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.635521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.635723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.635754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.635900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.635931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.636062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.636092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.636282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.636314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.636446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.636476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.636737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.636770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.636901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.636933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 00:27:54.753 [2024-07-15 13:02:25.637137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.753 [2024-07-15 13:02:25.637168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420 00:27:54.753 qpair failed and we were unable to recover it. 
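Each of the failures above is the same event repeating: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 at that moment, which is consistent with the target's TCP transport and listener only being initialized later in this log. A quick way to confirm the errno mapping on a Linux test box (standard Python, nothing SPDK-specific):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused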
00:27:54.753 [2024-07-15 13:02:25.637315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.753 [2024-07-15 13:02:25.637346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.753 qpair failed and we were unable to recover it.
00:27:54.754 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:54.754 [2024-07-15 13:02:25.639026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.754 [2024-07-15 13:02:25.639059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.754 qpair failed and we were unable to recover it.
00:27:54.754 [2024-07-15 13:02:25.639265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.754 [2024-07-15 13:02:25.639297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.754 qpair failed and we were unable to recover it.
00:27:54.754 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:54.754 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:54.754 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:54.754 [2024-07-15 13:02:25.639512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.754 [2024-07-15 13:02:25.639544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:54.754 qpair failed and we were unable to recover it.
00:27:54.754 [2024-07-15 13:02:25.641256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.754 [2024-07-15 13:02:25.641292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.754 qpair failed and we were unable to recover it.
00:27:54.754 [2024-07-15 13:02:25.641498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.754 [2024-07-15 13:02:25.641528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.754 qpair failed and we were unable to recover it.
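The rpc_cmd call above is the harness helper that forwards its arguments to SPDK's scripts/rpc.py against the running target. A minimal standalone sketch of the same step, assuming a target with the default RPC socket (per rpc.py conventions, the first argument is the bdev size in MB and the second the block size in bytes):

  # create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0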
00:27:54.754 [2024-07-15 13:02:25.641720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.754 [2024-07-15 13:02:25.641750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:54.754 qpair failed and we were unable to recover it.
00:27:55.019 [2024-07-15 13:02:25.658837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.019 [2024-07-15 13:02:25.658868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:55.019 qpair failed and we were unable to recover it.
00:27:55.019 [2024-07-15 13:02:25.659078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.019 [2024-07-15 13:02:25.659108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:55.019 qpair failed and we were unable to recover it.
00:27:55.019 Malloc0
00:27:55.019 [2024-07-15 13:02:25.661404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.019 [2024-07-15 13:02:25.661435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:55.019 qpair failed and we were unable to recover it.
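The bare "Malloc0" above is the stdout of the bdev_malloc_create RPC issued earlier (the call returns the created bdev's name); it lands mid-flood because the RPC output and the initiator's error stream share one console. When the name is needed later, a sketch of the usual bash capture, assuming the same rpc_cmd helper:

  # capture the returned bdev name instead of letting it print to the console
  malloc_name=$(rpc_cmd bdev_malloc_create 64 512 -b Malloc0)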
00:27:55.019 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:55.019 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:55.019 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:55.019 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.019 [2024-07-15 13:02:25.661582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.019 [2024-07-15 13:02:25.661612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:55.019 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.663404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.663434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
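The rpc_cmd nvmf_create_transport step above initializes the target's NVMe-oF TCP transport; the "*** TCP Transport Init ***" notice a little further down is the target-side confirmation. A minimal rpc.py equivalent is sketched below; flags beyond -t (including the -o used by this harness) differ between SPDK versions, so only the portable form is shown:

  # the transport must exist before subsystems and listeners are added
  scripts/rpc.py nvmf_create_transport -t tcp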
00:27:55.020 [2024-07-15 13:02:25.663599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.663629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa940000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.664486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.664528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110ced0 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.664730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.664771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.666211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.666251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.666406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.666437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.668334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:55.020 [2024-07-15 13:02:25.668502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.668534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.668768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.668799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.020 [2024-07-15 13:02:25.669081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.020 [2024-07-15 13:02:25.669113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.020 qpair failed and we were unable to recover it.
00:27:55.021 [2024-07-15 13:02:25.671635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.021 [2024-07-15 13:02:25.671670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa948000b90 with addr=10.0.0.2, port=4420
00:27:55.021 qpair failed and we were unable to recover it.
[... 7 further identical retries, 13:02:25.671976 through 13:02:25.673697 ...] 00:27:55.021 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] [... 2 further identical retries at 13:02:25.673871 and 13:02:25.674091 ...]
00:27:55.021 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [... 1 identical retry at 13:02:25.674281 ...] 00:27:55.021 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable [... 1 identical retry at 13:02:25.674597 ...] 00:27:55.021 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [... 6 further identical retries, 13:02:25.674869 through 13:02:25.676213 ...]
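rpc_cmd in the trace above is the autotest wrapper for SPDK's JSON-RPC interface, so the step just issued can be reproduced against a standalone target roughly as follows (a sketch assuming a running nvmf_tgt with the default RPC socket; -a allows any host NQN to connect, -s sets the serial number):
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001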
[... 39 further identical connect() failed (errno = 111) retries, 13:02:25.676462 through 13:02:25.685653 ...]
00:27:55.023 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] [... 1 identical retry at 13:02:25.685958 ...] 00:27:55.023 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [... 1 identical retry at 13:02:25.686254 ...] 00:27:55.023 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable [... 1 identical retry at 13:02:25.686590 ...] 00:27:55.023 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [... 5 further identical retries, 13:02:25.686887 through 13:02:25.687993 ...]
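Malloc0 is the bdev the test attaches as a namespace of cnode1. Reproducing this pair of steps outside the harness would look roughly like the sketch below; the 64 MiB size and 512-byte block size are assumptions for illustration, since this log does not show how Malloc0 was created:
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0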
[... 20 further identical connect() failed (errno = 111) retries, 13:02:25.688302 through 13:02:25.693482 ...]
00:27:55.024 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] [... 2 identical retries at 13:02:25.693744 and 13:02:25.693983 ...] 00:27:55.024 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [... 1 identical retry at 13:02:25.694222 ...] 00:27:55.024 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable [... 1 identical retry at 13:02:25.694519 ...] 00:27:55.024 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [... 4 further identical retries, 13:02:25.694797 through 13:02:25.695619 ...]
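Once this nvmf_subsystem_add_listener call lands, the target starts accepting TCP connections on 10.0.0.2:4420 (the *NOTICE* just below confirms it), so an initiator can finally reach the subsystem; for example, from a Linux host with nvme-cli (illustrative usage, not part of this test):
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1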
[... 5 further identical retries, 13:02:25.695937 through 13:02:25.697077 ...] 00:27:55.025 [2024-07-15 13:02:25.697365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.025 [2024-07-15 13:02:25.698926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.025 [2024-07-15 13:02:25.699056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.025 [2024-07-15 13:02:25.699105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.025 [2024-07-15 13:02:25.699128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.025 [2024-07-15 13:02:25.699149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.025 [2024-07-15 13:02:25.699199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.025 qpair failed and we were unable to recover it.
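Note how the failure mode changes here: with the listener up, connect() now succeeds, but the Fabrics CONNECT for the I/O qpair (qpair id 2) is rejected. The target-side "Unknown controller ID 0x1" means the host is asking to attach an I/O queue to controller ID 1, which this target instance does not track, presumably because the controller state did not survive the forced disconnect under test. On the host side the CONNECT completes with sct 1 (command-specific status type), sc 130, i.e. 0x82, which appears to correspond to the Fabrics "Connect Invalid Parameters" status; that mapping is an interpretation of the NVMe-oF spec, not something stated in the log. A trivial decode of the code:
printf 'sc 130 = 0x%x\n' 130   # -> sc 130 = 0x82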
00:27:55.025 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.025 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.025 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.025 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [... the Unknown controller ID 0x1 / Fabric CONNECT failure block above repeats at 13:02:25.708883, again ending "qpair failed and we were unable to recover it." ...] 00:27:55.026 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.026 13:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1876059 [... and again at 13:02:25.718874 ...]
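From here the host retries the same I/O-qpair CONNECT roughly every 10 ms while the script blocks on wait 1876059 above (the PID is presumably the backgrounded host-side reconnect process started earlier in the test). The easiest way to gauge the volume of repeats is to count them in a saved copy of the console output (the file name here is an assumption):
grep -c 'Unknown controller ID 0x1' nvmf-tcp-phy-autotest.log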
00:27:55.026 [2024-07-15 13:02:25.728808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.728881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.728897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.728905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.728913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.728929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 00:27:55.026 [2024-07-15 13:02:25.738824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.738886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.738901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.738908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.738914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.738931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 00:27:55.026 [2024-07-15 13:02:25.748825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.748885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.748902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.748910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.748916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.748931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 
00:27:55.026 [2024-07-15 13:02:25.758940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.758997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.759011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.759018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.759024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.759039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 00:27:55.026 [2024-07-15 13:02:25.768941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.769016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.769033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.769041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.769047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.769061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 00:27:55.026 [2024-07-15 13:02:25.778979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.779041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.779057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.779063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.779069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.779084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 
00:27:55.026 [2024-07-15 13:02:25.788923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.788990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.789006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.789013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.789020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.789034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.026 qpair failed and we were unable to recover it. 00:27:55.026 [2024-07-15 13:02:25.798966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.026 [2024-07-15 13:02:25.799022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.026 [2024-07-15 13:02:25.799038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.026 [2024-07-15 13:02:25.799045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.026 [2024-07-15 13:02:25.799052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.026 [2024-07-15 13:02:25.799067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.809042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.809105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.809120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.809127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.809133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.809148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 
00:27:55.027 [2024-07-15 13:02:25.819059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.819131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.819146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.819153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.819159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.819173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.829077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.829134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.829150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.829157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.829167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.829183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.839050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.839107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.839122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.839129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.839135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.839150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 
00:27:55.027 [2024-07-15 13:02:25.849131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.849194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.849208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.849215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.849221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.849240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.859144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.859209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.859223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.859235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.859240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.859254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.869217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.869281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.869295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.869302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.869308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.869323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 
00:27:55.027 [2024-07-15 13:02:25.879260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.879323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.879339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.879346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.879351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.879366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.889233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.889298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.889313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.889319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.889325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.889340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 00:27:55.027 [2024-07-15 13:02:25.899295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.027 [2024-07-15 13:02:25.899359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.027 [2024-07-15 13:02:25.899374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.027 [2024-07-15 13:02:25.899380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.027 [2024-07-15 13:02:25.899386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.027 [2024-07-15 13:02:25.899400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.027 qpair failed and we were unable to recover it. 
00:27:55.027 [2024-07-15 13:02:25.909266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.027 [2024-07-15 13:02:25.909325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.027 [2024-07-15 13:02:25.909340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.027 [2024-07-15 13:02:25.909346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.027 [2024-07-15 13:02:25.909352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.027 [2024-07-15 13:02:25.909367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.027 qpair failed and we were unable to recover it.
00:27:55.027 [2024-07-15 13:02:25.919292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.027 [2024-07-15 13:02:25.919346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.027 [2024-07-15 13:02:25.919360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.027 [2024-07-15 13:02:25.919370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.027 [2024-07-15 13:02:25.919375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.027 [2024-07-15 13:02:25.919390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.027 qpair failed and we were unable to recover it.
00:27:55.027 [2024-07-15 13:02:25.929366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.028 [2024-07-15 13:02:25.929427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.028 [2024-07-15 13:02:25.929442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.028 [2024-07-15 13:02:25.929449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.028 [2024-07-15 13:02:25.929455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.028 [2024-07-15 13:02:25.929469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.028 qpair failed and we were unable to recover it.
00:27:55.028 [2024-07-15 13:02:25.939531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.028 [2024-07-15 13:02:25.939595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.028 [2024-07-15 13:02:25.939609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.028 [2024-07-15 13:02:25.939615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.028 [2024-07-15 13:02:25.939621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.028 [2024-07-15 13:02:25.939635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.028 qpair failed and we were unable to recover it.
00:27:55.028 [2024-07-15 13:02:25.949461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.028 [2024-07-15 13:02:25.949536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.028 [2024-07-15 13:02:25.949551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.028 [2024-07-15 13:02:25.949557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.028 [2024-07-15 13:02:25.949563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.028 [2024-07-15 13:02:25.949577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.028 qpair failed and we were unable to recover it.
00:27:55.028 [2024-07-15 13:02:25.959507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.028 [2024-07-15 13:02:25.959563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.028 [2024-07-15 13:02:25.959577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.028 [2024-07-15 13:02:25.959584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.028 [2024-07-15 13:02:25.959590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.028 [2024-07-15 13:02:25.959604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.028 qpair failed and we were unable to recover it.
00:27:55.288 [2024-07-15 13:02:25.969503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.288 [2024-07-15 13:02:25.969565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.288 [2024-07-15 13:02:25.969579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.288 [2024-07-15 13:02:25.969586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.288 [2024-07-15 13:02:25.969593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.288 [2024-07-15 13:02:25.969607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.288 qpair failed and we were unable to recover it.
00:27:55.288 [2024-07-15 13:02:25.979469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.288 [2024-07-15 13:02:25.979531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.288 [2024-07-15 13:02:25.979545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.288 [2024-07-15 13:02:25.979552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.288 [2024-07-15 13:02:25.979558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:25.979572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:25.989604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:25.989663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:25.989678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:25.989684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:25.989690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:25.989704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:25.999579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:25.999637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:25.999651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:25.999657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:25.999663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:25.999677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.009590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.009650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.009664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.009674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.009680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.009694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.019610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.019675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.019689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.019695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.019701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.019715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.029643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.029697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.029711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.029717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.029723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.029737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.039619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.039717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.039731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.039738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.039744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.039758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.049689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.049747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.049762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.049768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.049774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.049788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.059714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.059778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.059792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.059799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.059805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.059819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.069691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.069748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.069762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.069768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.069774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.069788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.079805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.079871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.079885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.079892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.079897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.079912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.089821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.089878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.089892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.089898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.089905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.089919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.099842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.099897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.099915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.099921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.099928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.099942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.109872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.109928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.109943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.109950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.109955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.109969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.119922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.289 [2024-07-15 13:02:26.119979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.289 [2024-07-15 13:02:26.119993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.289 [2024-07-15 13:02:26.119999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.289 [2024-07-15 13:02:26.120005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.289 [2024-07-15 13:02:26.120019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.289 qpair failed and we were unable to recover it.
00:27:55.289 [2024-07-15 13:02:26.129929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.129993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.130008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.130014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.130020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.130033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.139958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.140017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.140032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.140038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.140044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.140061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.149955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.150015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.150030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.150036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.150042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.150056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.160010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.160067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.160081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.160088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.160094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.160108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.170088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.170146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.170160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.170166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.170172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.170186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.180070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.180122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.180136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.180142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.180148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.180162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.190104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.190166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.190183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.190190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.190196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.190209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.200135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.200193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.200207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.200214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.200220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.200238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.210161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.210219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.210237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.210243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.210250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.210264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.220187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.220251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.220266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.220272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.220278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.220292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.230223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.230288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.230302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.230308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.230318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.230331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.290 [2024-07-15 13:02:26.240275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.290 [2024-07-15 13:02:26.240334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.290 [2024-07-15 13:02:26.240349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.290 [2024-07-15 13:02:26.240355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.290 [2024-07-15 13:02:26.240361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.290 [2024-07-15 13:02:26.240375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.290 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.250282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.250339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.250353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.250359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.250365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.250379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.260302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.260380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.260395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.260402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.260407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.260422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.270347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.270438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.270452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.270458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.270465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.270480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.280380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.280441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.280455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.280462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.280467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.280482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.290379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.290444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.290459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.290465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.290471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.290485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.300413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.300489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.300503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.300510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.300516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.300530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.310390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.310450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.310464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.310471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.310477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.310491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.320504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.320563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.320578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.320588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.320594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.320608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.330520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.330581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.330596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.330603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.330608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.330622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.340531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.340591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.340605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.340613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.340619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.340632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.350563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.350624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.350639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.350645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.350651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.551 [2024-07-15 13:02:26.350665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.551 qpair failed and we were unable to recover it.
00:27:55.551 [2024-07-15 13:02:26.360633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.551 [2024-07-15 13:02:26.360687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.551 [2024-07-15 13:02:26.360701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.551 [2024-07-15 13:02:26.360708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.551 [2024-07-15 13:02:26.360714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.360728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.370634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.370710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.370725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.370731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.370737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.370751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.380687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.380746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.380760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.380766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.380772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.380786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.390685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.390746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.390761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.390767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.390773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.390787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.400721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.400787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.400802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.400808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.400814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.400828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.410732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.410793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.410807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.410816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.410822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.410837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.420755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.420813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.420827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.420834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.420839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.420854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.430848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.430919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.430933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.430940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.430946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.430959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.440832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.440891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.440906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.440913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.440918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.440932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.450867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.450942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.450956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.450963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.450968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.450982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.460945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.461001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.461015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.461022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.461028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.461041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.470959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.552 [2024-07-15 13:02:26.471018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.552 [2024-07-15 13:02:26.471033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.552 [2024-07-15 13:02:26.471039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.552 [2024-07-15 13:02:26.471045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:55.552 [2024-07-15 13:02:26.471058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.552 qpair failed and we were unable to recover it.
00:27:55.552 [2024-07-15 13:02:26.480956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.552 [2024-07-15 13:02:26.481008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.552 [2024-07-15 13:02:26.481022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.552 [2024-07-15 13:02:26.481028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.552 [2024-07-15 13:02:26.481034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.552 [2024-07-15 13:02:26.481048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.552 qpair failed and we were unable to recover it. 00:27:55.552 [2024-07-15 13:02:26.490986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.552 [2024-07-15 13:02:26.491043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.552 [2024-07-15 13:02:26.491057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.552 [2024-07-15 13:02:26.491064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.552 [2024-07-15 13:02:26.491069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.552 [2024-07-15 13:02:26.491083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.552 qpair failed and we were unable to recover it. 00:27:55.552 [2024-07-15 13:02:26.501022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.552 [2024-07-15 13:02:26.501079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.552 [2024-07-15 13:02:26.501096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.552 [2024-07-15 13:02:26.501102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.552 [2024-07-15 13:02:26.501108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.552 [2024-07-15 13:02:26.501122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.552 qpair failed and we were unable to recover it. 
00:27:55.814 [2024-07-15 13:02:26.511069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.511126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.511140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.511147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.511152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.511167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.521116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.521196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.521210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.521217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.521223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.521241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.531115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.531173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.531187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.531193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.531199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.531213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 
00:27:55.814 [2024-07-15 13:02:26.541165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.541236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.541251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.541257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.541263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.541280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.551174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.551235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.551249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.551256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.551262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.551276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.561201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.561261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.561275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.561281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.561287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.561301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 
00:27:55.814 [2024-07-15 13:02:26.571269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.571353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.571367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.571373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.571379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.571393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.581267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.581326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.581340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.581346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.581352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.581366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.591282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.591342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.591360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.591366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.591372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.591386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 
00:27:55.814 [2024-07-15 13:02:26.601345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.601414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.601428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.601434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.601440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.601454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.611349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.611419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.611433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.611440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.611445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.611458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 00:27:55.814 [2024-07-15 13:02:26.621369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.814 [2024-07-15 13:02:26.621430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.814 [2024-07-15 13:02:26.621444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.814 [2024-07-15 13:02:26.621451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.814 [2024-07-15 13:02:26.621457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.814 [2024-07-15 13:02:26.621471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.814 qpair failed and we were unable to recover it. 
00:27:55.814 [2024-07-15 13:02:26.631391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.631443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.631457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.631464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.631473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.631487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.641428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.641487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.641501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.641508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.641513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.641527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.651470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.651529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.651543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.651550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.651555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.651569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 
00:27:55.815 [2024-07-15 13:02:26.661546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.661607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.661621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.661628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.661633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.661647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.671502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.671606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.671619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.671626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.671632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.671646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.681536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.681594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.681608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.681615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.681621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.681634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 
00:27:55.815 [2024-07-15 13:02:26.691564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.691620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.691634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.691640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.691647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.691661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.701611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.701671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.701685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.701691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.701697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.701711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.711607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.711665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.711679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.711686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.711692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.711706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 
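The retries above land roughly every 10 ms (13:02:26.691, .701, .711, ...), so this is a tight reconnect loop rather than a one-off failure. The status pair "sct 1, sc 130" decodes to Status Code Type 0x1 (command specific) with Status Code 0x82; in the Fabrics CONNECT status code set that is "Connect Invalid Parameters", which matches the target rejecting the controller ID in the connect data. A standalone decoder for that status set (values as defined by the NVMe-oF Fabrics specification) looks like:

    /* decode_sc.c: maps the "sct 1, sc 130" from the log above to a
     * Fabrics CONNECT status name. Values per the NVMe-oF spec. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *connect_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct != 0x1) {               /* SCT 0x1 = command specific */
            return "not a command-specific status";
        }
        switch (sc) {
        case 0x80: return "Connect Incompatible Format";
        case 0x81: return "Connect Controller Busy";
        case 0x82: return "Connect Invalid Parameters";
        case 0x83: return "Connect Restart Discovery";
        case 0x84: return "Connect Invalid Host";
        default:   return "unknown command-specific status";
        }
    }

    int main(void)
    {
        /* The log reports sct 1, sc 130; 130 == 0x82. */
        printf("sct 1, sc 130 -> %s\n", connect_status_str(0x1, 130));
        return 0;
    }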
00:27:55.815 [2024-07-15 13:02:26.721643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.721699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.721714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.721721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.721729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.721743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.731709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.731783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.731797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.731803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.731809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.731823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.741737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.741800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.741814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.741820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.741826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.741840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 
00:27:55.815 [2024-07-15 13:02:26.751726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.751783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.751796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.751803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.751808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.751822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:55.815 [2024-07-15 13:02:26.761772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.815 [2024-07-15 13:02:26.761827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.815 [2024-07-15 13:02:26.761841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.815 [2024-07-15 13:02:26.761847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.815 [2024-07-15 13:02:26.761853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:55.815 [2024-07-15 13:02:26.761867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.815 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-15 13:02:26.771808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.076 [2024-07-15 13:02:26.771905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.076 [2024-07-15 13:02:26.771918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.076 [2024-07-15 13:02:26.771925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.076 [2024-07-15 13:02:26.771930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.076 [2024-07-15 13:02:26.771944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-15 13:02:26.781835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.076 [2024-07-15 13:02:26.781919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.076 [2024-07-15 13:02:26.781933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.076 [2024-07-15 13:02:26.781939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.076 [2024-07-15 13:02:26.781945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.076 [2024-07-15 13:02:26.781958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-15 13:02:26.791845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.076 [2024-07-15 13:02:26.791904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.076 [2024-07-15 13:02:26.791919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.076 [2024-07-15 13:02:26.791925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.076 [2024-07-15 13:02:26.791931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.076 [2024-07-15 13:02:26.791945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.076 qpair failed and we were unable to recover it. 00:27:56.076 [2024-07-15 13:02:26.801861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.076 [2024-07-15 13:02:26.801924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.076 [2024-07-15 13:02:26.801939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.076 [2024-07-15 13:02:26.801946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.076 [2024-07-15 13:02:26.801952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.076 [2024-07-15 13:02:26.801966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.076 qpair failed and we were unable to recover it. 
00:27:56.076 [2024-07-15 13:02:26.811901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.076 [2024-07-15 13:02:26.811956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.076 [2024-07-15 13:02:26.811970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.076 [2024-07-15 13:02:26.811983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.811989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.812003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.821926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.821982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.821996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.822003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.822009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.822023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.831938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.831997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.832010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.832017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.832023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.832037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-15 13:02:26.841923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.841980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.841994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.842001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.842007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.842021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.852029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.852086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.852100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.852107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.852112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.852126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.861995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.862058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.862073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.862080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.862086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.862100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-15 13:02:26.872077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.872137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.872151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.872158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.872164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.872178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.882079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.882145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.882160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.882167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.882173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.882187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.892059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.892121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.892136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.892143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.892149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.892164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-15 13:02:26.902213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.902284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.902302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.902309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.902315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.902328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.912189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.912252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.912266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.912273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.912279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.912293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.922233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.922291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.922305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.922311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.922318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.922332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 
00:27:56.077 [2024-07-15 13:02:26.932186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.932248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.932263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.932269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.077 [2024-07-15 13:02:26.932275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.077 [2024-07-15 13:02:26.932290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.077 qpair failed and we were unable to recover it. 00:27:56.077 [2024-07-15 13:02:26.942312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.077 [2024-07-15 13:02:26.942392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.077 [2024-07-15 13:02:26.942406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.077 [2024-07-15 13:02:26.942413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.078 [2024-07-15 13:02:26.942418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.078 [2024-07-15 13:02:26.942435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-15 13:02:26.952303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.078 [2024-07-15 13:02:26.952361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.078 [2024-07-15 13:02:26.952375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.078 [2024-07-15 13:02:26.952382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.078 [2024-07-15 13:02:26.952388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.078 [2024-07-15 13:02:26.952402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.078 qpair failed and we were unable to recover it. 
00:27:56.078 [2024-07-15 13:02:26.962336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.078 [2024-07-15 13:02:26.962395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.078 [2024-07-15 13:02:26.962409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.078 [2024-07-15 13:02:26.962415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.078 [2024-07-15 13:02:26.962421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.078 [2024-07-15 13:02:26.962435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-15 13:02:26.972377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.078 [2024-07-15 13:02:26.972440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.078 [2024-07-15 13:02:26.972454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.078 [2024-07-15 13:02:26.972460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.078 [2024-07-15 13:02:26.972466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.078 [2024-07-15 13:02:26.972480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.078 qpair failed and we were unable to recover it. 00:27:56.078 [2024-07-15 13:02:26.982396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.078 [2024-07-15 13:02:26.982496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.078 [2024-07-15 13:02:26.982510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.078 [2024-07-15 13:02:26.982517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.078 [2024-07-15 13:02:26.982523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.078 [2024-07-15 13:02:26.982537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.078 qpair failed and we were unable to recover it. 
00:27:56.078 [2024-07-15 13:02:26.992426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.078 [2024-07-15 13:02:26.992482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.078 [2024-07-15 13:02:26.992499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.078 [2024-07-15 13:02:26.992506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.078 [2024-07-15 13:02:26.992511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.078 [2024-07-15 13:02:26.992526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.078 qpair failed and we were unable to recover it.
00:27:56.078 [2024-07-15 13:02:27.002456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.078 [2024-07-15 13:02:27.002509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.078 [2024-07-15 13:02:27.002523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.078 [2024-07-15 13:02:27.002529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.078 [2024-07-15 13:02:27.002535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.078 [2024-07-15 13:02:27.002549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.078 qpair failed and we were unable to recover it.
00:27:56.078 [2024-07-15 13:02:27.012516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.078 [2024-07-15 13:02:27.012735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.078 [2024-07-15 13:02:27.012751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.078 [2024-07-15 13:02:27.012758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.078 [2024-07-15 13:02:27.012764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.078 [2024-07-15 13:02:27.012778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.078 qpair failed and we were unable to recover it.
00:27:56.078 [2024-07-15 13:02:27.022521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.078 [2024-07-15 13:02:27.022577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.078 [2024-07-15 13:02:27.022591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.078 [2024-07-15 13:02:27.022597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.078 [2024-07-15 13:02:27.022603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.078 [2024-07-15 13:02:27.022617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.078 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.032600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.032657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.032671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.032678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.032687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.032702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.042513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.042572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.042586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.042592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.042598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.042612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.052618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.052681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.052695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.052701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.052707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.052721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.062627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.062684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.062697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.062705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.062711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.062725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.072602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.072659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.072673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.072680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.072686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.072699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.082627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.082689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.082704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.082711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.082717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.082731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.092716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.092779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.092793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.092800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.092806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.092820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.102736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.102800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.102815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.102822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.102828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.102841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.112745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.112802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.112816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.112823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.112829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.112843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.122744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.122803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.122817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.122824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.122833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.122847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.132824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.132884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.132898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.132905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.132911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.339 [2024-07-15 13:02:27.132926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.339 qpair failed and we were unable to recover it.
00:27:56.339 [2024-07-15 13:02:27.142881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.339 [2024-07-15 13:02:27.142942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.339 [2024-07-15 13:02:27.142957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.339 [2024-07-15 13:02:27.142963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.339 [2024-07-15 13:02:27.142969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.142983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.152871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.152938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.152952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.152958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.152964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.152977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.162937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.163026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.163041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.163047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.163053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.163068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.172983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.173043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.173058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.173064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.173070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.173085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.182994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.183056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.183071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.183077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.183083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.183097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.193020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.193077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.193092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.193099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.193104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.193119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.203069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.203142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.203157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.203163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.203169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.203183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.213098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.213155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.213169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.213180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.213186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.213200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.223103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.223166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.223180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.223187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.223193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.223207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.233132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.233220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.233238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.233245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.233251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.233266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.243236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.243290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.243304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.243311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.243317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.243330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.253258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.253368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.253390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.253397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.253403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.253417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.263235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.263297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.263312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.263318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.263324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.263339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.273257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.273317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.273332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.273338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.273344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.273359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.340 [2024-07-15 13:02:27.283278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.340 [2024-07-15 13:02:27.283334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.340 [2024-07-15 13:02:27.283349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.340 [2024-07-15 13:02:27.283355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.340 [2024-07-15 13:02:27.283361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.340 [2024-07-15 13:02:27.283375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.340 qpair failed and we were unable to recover it.
00:27:56.601 [2024-07-15 13:02:27.293271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.601 [2024-07-15 13:02:27.293354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.601 [2024-07-15 13:02:27.293368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.601 [2024-07-15 13:02:27.293375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.601 [2024-07-15 13:02:27.293381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.601 [2024-07-15 13:02:27.293395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.601 qpair failed and we were unable to recover it.
00:27:56.601 [2024-07-15 13:02:27.303364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.601 [2024-07-15 13:02:27.303451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.601 [2024-07-15 13:02:27.303468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.601 [2024-07-15 13:02:27.303475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.601 [2024-07-15 13:02:27.303480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.601 [2024-07-15 13:02:27.303495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.601 qpair failed and we were unable to recover it.
00:27:56.601 [2024-07-15 13:02:27.313384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.601 [2024-07-15 13:02:27.313447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.601 [2024-07-15 13:02:27.313462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.601 [2024-07-15 13:02:27.313468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.601 [2024-07-15 13:02:27.313474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.601 [2024-07-15 13:02:27.313488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.601 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.323410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.323467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.323481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.323487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.323493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.323507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.333437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.333498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.333512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.333518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.333524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.333538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.343492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.343580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.343594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.343601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.343606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.343625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.353420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.353482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.353496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.353502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.353508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.353522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.363502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.363561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.363575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.363582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.363588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.363601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.373488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.373546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.373560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.373566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.373572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.373587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.383532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.383619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.383634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.383640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.383646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.383660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.393586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.393644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.393662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.393669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.393674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.393688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.403557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.403614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.403629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.403635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.403641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.403655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.413673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.413751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.413766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.413774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.413781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.413795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.423605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.423662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.423676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.423683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.423688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.423702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.433714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.433769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.433783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.433789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.433795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.433812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.443742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.443800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.443814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.443821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.443826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.443840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.453798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.453859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.602 [2024-07-15 13:02:27.453873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.602 [2024-07-15 13:02:27.453880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.602 [2024-07-15 13:02:27.453886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.602 [2024-07-15 13:02:27.453901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.602 qpair failed and we were unable to recover it.
00:27:56.602 [2024-07-15 13:02:27.463730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.602 [2024-07-15 13:02:27.463799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.463814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.463821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.463826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.463840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.473843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.473900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.473914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.473920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.473926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.473940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.483893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.483955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.483970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.483976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.483982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.483996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.493833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.493920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.493935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.493941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.493947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.493962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.503902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.503965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.503979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.503986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.503992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.504007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.513995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.514059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.514073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.514079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.514085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.514099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.523994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.524052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.524066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.524073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.524082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.524096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.534017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.534074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.534090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.534097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.534104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.534119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.544091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.544153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.544167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.544174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.544180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.544194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.603 [2024-07-15 13:02:27.554054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.603 [2024-07-15 13:02:27.554113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.603 [2024-07-15 13:02:27.554127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.603 [2024-07-15 13:02:27.554133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.603 [2024-07-15 13:02:27.554139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.603 [2024-07-15 13:02:27.554153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.603 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.564094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.564151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.564166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.564173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.564179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.564193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.574121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.574186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.574200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.574206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.574212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.574230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.584133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.584213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.584232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.584239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.584245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.584259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.594185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.594250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.594265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.594271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.594277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.594291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.604203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.604265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.604279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.604286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.604292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.604306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.614239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.614295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.614309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.614319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.614325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.614339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.624300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.624361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.624375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.624382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.624388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.624402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.634222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.634286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.634301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.634308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.634314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.634328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.644317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.644374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.644388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.644395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.644401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.644414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.654349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.654439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.654453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.654459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.654465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.654480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.664369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.664426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.664440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.664447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.664453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.664467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.674399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.864 [2024-07-15 13:02:27.674457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.864 [2024-07-15 13:02:27.674471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.864 [2024-07-15 13:02:27.674478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.864 [2024-07-15 13:02:27.674484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:56.864 [2024-07-15 13:02:27.674498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.864 qpair failed and we were unable to recover it.
00:27:56.864 [2024-07-15 13:02:27.684433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.864 [2024-07-15 13:02:27.684494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.684508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.684514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.684520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.684533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.694464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.694523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.694538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.694544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.694550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.694564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.704488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.704570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.704584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.704593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.704599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.704612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 
00:27:56.865 [2024-07-15 13:02:27.714509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.714570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.714584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.714590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.714596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.714610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.724539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.724600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.724614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.724621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.724627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.724641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.734574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.734631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.734645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.734652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.734658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.734672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 
00:27:56.865 [2024-07-15 13:02:27.744609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.744668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.744682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.744689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.744694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.744708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.754668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.754728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.754742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.754749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.754755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.754768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.764678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.764736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.764749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.764756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.764763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.764776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 
00:27:56.865 [2024-07-15 13:02:27.774724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.774802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.774816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.774822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.774828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.774842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.784726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.784799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.784813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.784819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.784825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.784838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.794748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.794810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.794826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.794833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.794839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.794852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 
00:27:56.865 [2024-07-15 13:02:27.804780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.804838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.804852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.804859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.804865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.804878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:56.865 [2024-07-15 13:02:27.814813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.865 [2024-07-15 13:02:27.814874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.865 [2024-07-15 13:02:27.814888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.865 [2024-07-15 13:02:27.814894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.865 [2024-07-15 13:02:27.814900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:56.865 [2024-07-15 13:02:27.814913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.865 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.824830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.824891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.824905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.824913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.824919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.824933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 
00:27:57.126 [2024-07-15 13:02:27.834853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.834908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.834922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.834929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.834935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.834952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.844909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.844966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.844980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.844987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.844993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.845006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.854923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.855011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.855025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.855031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.855037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.855052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 
00:27:57.126 [2024-07-15 13:02:27.864991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.865053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.865067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.865074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.865080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.865094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.874972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.875027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.875042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.875048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.875054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.875069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.885009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.885064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.885081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.885088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.885094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.885107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 
00:27:57.126 [2024-07-15 13:02:27.895033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.895093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.895107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.895114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.895120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.895134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.905105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.905165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.905179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.905186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.905192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.905206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.915081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.915141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.915155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.915162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.915167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.915181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 
00:27:57.126 [2024-07-15 13:02:27.925112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.925170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.925184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.925191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.925199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.925213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.935150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.935207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.126 [2024-07-15 13:02:27.935222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.126 [2024-07-15 13:02:27.935233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.126 [2024-07-15 13:02:27.935239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.126 [2024-07-15 13:02:27.935253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.126 qpair failed and we were unable to recover it. 00:27:57.126 [2024-07-15 13:02:27.945251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.126 [2024-07-15 13:02:27.945321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:27.945336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:27.945343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:27.945349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:27.945363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 
00:27:57.127 [2024-07-15 13:02:27.955181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:27.955247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:27.955261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:27.955268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:27.955274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:27.955288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:27.965275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:27.965331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:27.965345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:27.965353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:27.965359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:27.965373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:27.975231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:27.975294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:27.975308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:27.975315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:27.975321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:27.975336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 
00:27:57.127 [2024-07-15 13:02:27.985320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:27.985386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:27.985400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:27.985407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:27.985413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:27.985427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:27.995319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:27.995382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:27.995396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:27.995402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:27.995408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:27.995422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:28.005321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.005385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.005400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.005407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.005412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.005427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 
00:27:57.127 [2024-07-15 13:02:28.015390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.015477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.015491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.015500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.015506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.015522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:28.025401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.025465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.025479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.025485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.025491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.025505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:28.035433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.035491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.035505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.035511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.035517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.035531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 
00:27:57.127 [2024-07-15 13:02:28.045404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.045462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.045476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.045484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.045489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.045503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:28.055505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.055562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.055576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.055583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.055588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.055602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.127 [2024-07-15 13:02:28.065527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.065592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.065606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.065613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.065618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.065632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 
00:27:57.127 [2024-07-15 13:02:28.075547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.127 [2024-07-15 13:02:28.075604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.127 [2024-07-15 13:02:28.075618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.127 [2024-07-15 13:02:28.075625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.127 [2024-07-15 13:02:28.075630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.127 [2024-07-15 13:02:28.075644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.127 qpair failed and we were unable to recover it. 00:27:57.388 [2024-07-15 13:02:28.085514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.085571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.388 [2024-07-15 13:02:28.085585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.388 [2024-07-15 13:02:28.085592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.388 [2024-07-15 13:02:28.085598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.388 [2024-07-15 13:02:28.085613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.388 qpair failed and we were unable to recover it. 00:27:57.388 [2024-07-15 13:02:28.095606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.095665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.388 [2024-07-15 13:02:28.095679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.388 [2024-07-15 13:02:28.095686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.388 [2024-07-15 13:02:28.095691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.388 [2024-07-15 13:02:28.095706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.388 qpair failed and we were unable to recover it. 
00:27:57.388 [2024-07-15 13:02:28.105618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.105673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.388 [2024-07-15 13:02:28.105688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.388 [2024-07-15 13:02:28.105697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.388 [2024-07-15 13:02:28.105703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.388 [2024-07-15 13:02:28.105717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.388 qpair failed and we were unable to recover it. 00:27:57.388 [2024-07-15 13:02:28.115660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.115720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.388 [2024-07-15 13:02:28.115734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.388 [2024-07-15 13:02:28.115740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.388 [2024-07-15 13:02:28.115746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.388 [2024-07-15 13:02:28.115760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.388 qpair failed and we were unable to recover it. 00:27:57.388 [2024-07-15 13:02:28.125696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.125749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.388 [2024-07-15 13:02:28.125764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.388 [2024-07-15 13:02:28.125770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.388 [2024-07-15 13:02:28.125776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.388 [2024-07-15 13:02:28.125790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.388 qpair failed and we were unable to recover it. 
00:27:57.388 [2024-07-15 13:02:28.135722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.135782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.388 [2024-07-15 13:02:28.135796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.388 [2024-07-15 13:02:28.135802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.388 [2024-07-15 13:02:28.135808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.388 [2024-07-15 13:02:28.135822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.388 qpair failed and we were unable to recover it. 00:27:57.388 [2024-07-15 13:02:28.145683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.388 [2024-07-15 13:02:28.145742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.389 [2024-07-15 13:02:28.145756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.389 [2024-07-15 13:02:28.145762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.389 [2024-07-15 13:02:28.145768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.389 [2024-07-15 13:02:28.145782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.389 qpair failed and we were unable to recover it. 00:27:57.389 [2024-07-15 13:02:28.155774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.389 [2024-07-15 13:02:28.155829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.389 [2024-07-15 13:02:28.155843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.389 [2024-07-15 13:02:28.155849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.389 [2024-07-15 13:02:28.155855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.389 [2024-07-15 13:02:28.155869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.389 qpair failed and we were unable to recover it. 
00:27:57.389 [2024-07-15 13:02:28.165806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.389 [2024-07-15 13:02:28.165860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.389 [2024-07-15 13:02:28.165874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.389 [2024-07-15 13:02:28.165881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.389 [2024-07-15 13:02:28.165887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.389 [2024-07-15 13:02:28.165900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.389 qpair failed and we were unable to recover it. 00:27:57.389 [2024-07-15 13:02:28.175842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.389 [2024-07-15 13:02:28.175901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.389 [2024-07-15 13:02:28.175915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.389 [2024-07-15 13:02:28.175921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.389 [2024-07-15 13:02:28.175927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.389 [2024-07-15 13:02:28.175941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.389 qpair failed and we were unable to recover it. 00:27:57.389 [2024-07-15 13:02:28.185858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.389 [2024-07-15 13:02:28.185914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.389 [2024-07-15 13:02:28.185928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.389 [2024-07-15 13:02:28.185934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.389 [2024-07-15 13:02:28.185941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.389 [2024-07-15 13:02:28.185954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.389 qpair failed and we were unable to recover it. 
00:27:57.389 [2024-07-15 13:02:28.195920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.195976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.195993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.195999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.196005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.196019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.205873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.205960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.205974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.205980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.205986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.206000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.215954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.216010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.216024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.216031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.216037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.216051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.225968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.226025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.226040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.226047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.226052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.226067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.236051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.236149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.236163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.236170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.236176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.236193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.246020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.246074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.246089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.246096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.246101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.246115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.256069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.256135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.256149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.256156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.256162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.256176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.266103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.266160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.266174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.266180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.266186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.266200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.276139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.276191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.276205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.276212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.389 [2024-07-15 13:02:28.276218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.389 [2024-07-15 13:02:28.276235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.389 qpair failed and we were unable to recover it.
00:27:57.389 [2024-07-15 13:02:28.286060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.389 [2024-07-15 13:02:28.286117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.389 [2024-07-15 13:02:28.286138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.389 [2024-07-15 13:02:28.286144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.390 [2024-07-15 13:02:28.286150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.390 [2024-07-15 13:02:28.286164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.390 qpair failed and we were unable to recover it.
00:27:57.390 [2024-07-15 13:02:28.296106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.390 [2024-07-15 13:02:28.296163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.390 [2024-07-15 13:02:28.296177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.390 [2024-07-15 13:02:28.296184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.390 [2024-07-15 13:02:28.296190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.390 [2024-07-15 13:02:28.296204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.390 qpair failed and we were unable to recover it.
00:27:57.390 [2024-07-15 13:02:28.306180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.390 [2024-07-15 13:02:28.306238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.390 [2024-07-15 13:02:28.306252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.390 [2024-07-15 13:02:28.306259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.390 [2024-07-15 13:02:28.306264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.390 [2024-07-15 13:02:28.306279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.390 qpair failed and we were unable to recover it.
00:27:57.390 [2024-07-15 13:02:28.316261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.390 [2024-07-15 13:02:28.316323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.390 [2024-07-15 13:02:28.316336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.390 [2024-07-15 13:02:28.316342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.390 [2024-07-15 13:02:28.316348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.390 [2024-07-15 13:02:28.316362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.390 qpair failed and we were unable to recover it.
00:27:57.390 [2024-07-15 13:02:28.326249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.390 [2024-07-15 13:02:28.326304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.390 [2024-07-15 13:02:28.326319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.390 [2024-07-15 13:02:28.326326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.390 [2024-07-15 13:02:28.326335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.390 [2024-07-15 13:02:28.326349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.390 qpair failed and we were unable to recover it.
00:27:57.390 [2024-07-15 13:02:28.336297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.390 [2024-07-15 13:02:28.336359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.390 [2024-07-15 13:02:28.336373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.390 [2024-07-15 13:02:28.336380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.390 [2024-07-15 13:02:28.336385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.390 [2024-07-15 13:02:28.336399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.390 qpair failed and we were unable to recover it.
00:27:57.650 [2024-07-15 13:02:28.346293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.650 [2024-07-15 13:02:28.346353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.650 [2024-07-15 13:02:28.346368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.650 [2024-07-15 13:02:28.346375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.650 [2024-07-15 13:02:28.346381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.650 [2024-07-15 13:02:28.346395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.650 qpair failed and we were unable to recover it.
00:27:57.650 [2024-07-15 13:02:28.356326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.650 [2024-07-15 13:02:28.356385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.650 [2024-07-15 13:02:28.356399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.650 [2024-07-15 13:02:28.356406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.650 [2024-07-15 13:02:28.356411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.650 [2024-07-15 13:02:28.356426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.650 qpair failed and we were unable to recover it.
00:27:57.650 [2024-07-15 13:02:28.366364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.650 [2024-07-15 13:02:28.366450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.650 [2024-07-15 13:02:28.366465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.650 [2024-07-15 13:02:28.366471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.650 [2024-07-15 13:02:28.366477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.650 [2024-07-15 13:02:28.366491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.650 qpair failed and we were unable to recover it.
00:27:57.650 [2024-07-15 13:02:28.376429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.376491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.376505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.376512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.376518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.376532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.386381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.386445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.386459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.386466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.386471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.386485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.396444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.396499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.396514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.396521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.396526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.396541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.406478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.406537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.406551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.406557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.406563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.406577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.416509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.416567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.416581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.416588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.416597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.416610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.426477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.426541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.426555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.426562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.426568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.426582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.436510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.436594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.436609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.436615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.436621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.436635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.446567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.446624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.446639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.446645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.446651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.446665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.456599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.456656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.456670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.456676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.456682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.456696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.466565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.466625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.466640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.466647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.466652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.466666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.476668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.476725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.476740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.476746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.476752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.476766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.486678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.486735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.486748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.486755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.651 [2024-07-15 13:02:28.486761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.651 [2024-07-15 13:02:28.486775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.651 qpair failed and we were unable to recover it.
00:27:57.651 [2024-07-15 13:02:28.496737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.651 [2024-07-15 13:02:28.496796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.651 [2024-07-15 13:02:28.496811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.651 [2024-07-15 13:02:28.496817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.496823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.496837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.506763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.506823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.506838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.506849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.506855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.506869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.516721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.516781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.516795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.516801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.516807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.516820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.526755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.526813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.526827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.526834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.526839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.526853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.536775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.536837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.536852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.536859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.536864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.536878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.546913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.546989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.547004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.547010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.547016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.547030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.556942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.557007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.557021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.557027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.557033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.557047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.566917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.566978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.566993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.567000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.567006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.567020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.576947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.577026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.577042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.577050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.577057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.577072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.586944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.587040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.587055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.587061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.587067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.587082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.652 [2024-07-15 13:02:28.597000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.652 [2024-07-15 13:02:28.597062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.652 [2024-07-15 13:02:28.597080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.652 [2024-07-15 13:02:28.597086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.652 [2024-07-15 13:02:28.597092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.652 [2024-07-15 13:02:28.597106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.652 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.607009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.607096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.607110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.607117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.607123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.607137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.617075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.617136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.617151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.617157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.617163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.617177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.627101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.627157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.627171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.627178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.627183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.627197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.637169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.637233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.637248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.637254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.637260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.637278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.647145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.647201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.647216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.647222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.647233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.647247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.657142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.657200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.657214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.657221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.657231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.657246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.667160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.667221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.667238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.667245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.667251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.913 [2024-07-15 13:02:28.667265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.913 qpair failed and we were unable to recover it.
00:27:57.913 [2024-07-15 13:02:28.677187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.913 [2024-07-15 13:02:28.677275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.913 [2024-07-15 13:02:28.677290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.913 [2024-07-15 13:02:28.677297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.913 [2024-07-15 13:02:28.677303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.677318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.687303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.687362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.687380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.687386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.687392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.687406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.697319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.697380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.697394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.697400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.697406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.697420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.707270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.707326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.707339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.707346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.707352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.707367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.717315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.717373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.717388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.717394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.717400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.717414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.727401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.727485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.727499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.727505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.727514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.727530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.737482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.737541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.737555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.737562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.737568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.737582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.747475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.747543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.747557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.747563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.747569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.747583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.757485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.914 [2024-07-15 13:02:28.757544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.914 [2024-07-15 13:02:28.757560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.914 [2024-07-15 13:02:28.757566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.914 [2024-07-15 13:02:28.757572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90
00:27:57.914 [2024-07-15 13:02:28.757587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.914 qpair failed and we were unable to recover it.
00:27:57.914 [2024-07-15 13:02:28.767533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.914 [2024-07-15 13:02:28.767609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.914 [2024-07-15 13:02:28.767623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.914 [2024-07-15 13:02:28.767629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.914 [2024-07-15 13:02:28.767635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.914 [2024-07-15 13:02:28.767649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.914 qpair failed and we were unable to recover it. 00:27:57.914 [2024-07-15 13:02:28.777536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.914 [2024-07-15 13:02:28.777598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.914 [2024-07-15 13:02:28.777613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.914 [2024-07-15 13:02:28.777620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.914 [2024-07-15 13:02:28.777625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.914 [2024-07-15 13:02:28.777640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.914 qpair failed and we were unable to recover it. 00:27:57.914 [2024-07-15 13:02:28.787596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.914 [2024-07-15 13:02:28.787696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.914 [2024-07-15 13:02:28.787710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.914 [2024-07-15 13:02:28.787717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.914 [2024-07-15 13:02:28.787723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.914 [2024-07-15 13:02:28.787737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.914 qpair failed and we were unable to recover it. 
00:27:57.914 [2024-07-15 13:02:28.797547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.914 [2024-07-15 13:02:28.797609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.914 [2024-07-15 13:02:28.797623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.914 [2024-07-15 13:02:28.797630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.914 [2024-07-15 13:02:28.797635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.797649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 00:27:57.915 [2024-07-15 13:02:28.807717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.915 [2024-07-15 13:02:28.807776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.915 [2024-07-15 13:02:28.807791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.915 [2024-07-15 13:02:28.807798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.915 [2024-07-15 13:02:28.807804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.807817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 00:27:57.915 [2024-07-15 13:02:28.817672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.915 [2024-07-15 13:02:28.817729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.915 [2024-07-15 13:02:28.817743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.915 [2024-07-15 13:02:28.817749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.915 [2024-07-15 13:02:28.817759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.817773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 
00:27:57.915 [2024-07-15 13:02:28.827721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.915 [2024-07-15 13:02:28.827782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.915 [2024-07-15 13:02:28.827796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.915 [2024-07-15 13:02:28.827803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.915 [2024-07-15 13:02:28.827809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.827823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 00:27:57.915 [2024-07-15 13:02:28.837746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.915 [2024-07-15 13:02:28.837803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.915 [2024-07-15 13:02:28.837816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.915 [2024-07-15 13:02:28.837823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.915 [2024-07-15 13:02:28.837829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.837842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 00:27:57.915 [2024-07-15 13:02:28.847761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.915 [2024-07-15 13:02:28.847838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.915 [2024-07-15 13:02:28.847852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.915 [2024-07-15 13:02:28.847859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.915 [2024-07-15 13:02:28.847865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.847879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 
00:27:57.915 [2024-07-15 13:02:28.857752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.915 [2024-07-15 13:02:28.857809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.915 [2024-07-15 13:02:28.857822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.915 [2024-07-15 13:02:28.857829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.915 [2024-07-15 13:02:28.857835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:57.915 [2024-07-15 13:02:28.857849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.915 qpair failed and we were unable to recover it. 00:27:58.175 [2024-07-15 13:02:28.867872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.175 [2024-07-15 13:02:28.867933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.175 [2024-07-15 13:02:28.867947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.175 [2024-07-15 13:02:28.867954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.175 [2024-07-15 13:02:28.867959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.175 [2024-07-15 13:02:28.867973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.175 qpair failed and we were unable to recover it. 00:27:58.175 [2024-07-15 13:02:28.877865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.175 [2024-07-15 13:02:28.877921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.175 [2024-07-15 13:02:28.877935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.175 [2024-07-15 13:02:28.877941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.175 [2024-07-15 13:02:28.877947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.175 [2024-07-15 13:02:28.877962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.175 qpair failed and we were unable to recover it. 
00:27:58.175 [2024-07-15 13:02:28.887946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.175 [2024-07-15 13:02:28.888033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.888047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.888053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.888059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.888073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.897957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.898058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.898072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.898078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.898084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.898098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.907895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.907953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.907967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.907977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.907983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.907998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-15 13:02:28.917908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.917966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.917981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.917988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.917994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.918008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.927985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.928041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.928055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.928062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.928069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.928083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.938016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.938080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.938094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.938101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.938106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.938121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-15 13:02:28.948091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.948154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.948168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.948175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.948181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.948194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.958090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.958147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.958161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.958167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.958173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.958187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.968035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.968087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.968101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.968108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.968114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.968129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-15 13:02:28.978140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.978203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.978217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.978223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.978233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.978247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.988154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.988216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.988234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.988242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.988248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.988262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:28.998193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:28.998253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:28.998271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:28.998278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:28.998284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:28.998298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 
00:27:58.176 [2024-07-15 13:02:29.008207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:29.008267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:29.008282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:29.008289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:29.008294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:29.008309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:29.018256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:29.018318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:29.018332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:29.018339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.176 [2024-07-15 13:02:29.018345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.176 [2024-07-15 13:02:29.018359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.176 qpair failed and we were unable to recover it. 00:27:58.176 [2024-07-15 13:02:29.028220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.176 [2024-07-15 13:02:29.028287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.176 [2024-07-15 13:02:29.028301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.176 [2024-07-15 13:02:29.028307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.028313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.028328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-15 13:02:29.038288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.038343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.038357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.038364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.038370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.038390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-15 13:02:29.048349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.048409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.048423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.048430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.048437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.048451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-15 13:02:29.058363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.058423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.058436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.058443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.058448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.058462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-15 13:02:29.068415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.068468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.068483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.068489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.068495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.068509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-15 13:02:29.078441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.078495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.078509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.078516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.078522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.078536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-15 13:02:29.088405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.088465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.088482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.088489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.088494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.088508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.177 [2024-07-15 13:02:29.098519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.098575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.098590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.098597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.098602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.098616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-15 13:02:29.108550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.108607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.108622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.108628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.108634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.108648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 00:27:58.177 [2024-07-15 13:02:29.118593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.177 [2024-07-15 13:02:29.118649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.177 [2024-07-15 13:02:29.118663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.177 [2024-07-15 13:02:29.118669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.177 [2024-07-15 13:02:29.118675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.177 [2024-07-15 13:02:29.118690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.177 qpair failed and we were unable to recover it. 
00:27:58.437 [2024-07-15 13:02:29.128563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.128620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.128634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.128641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.128646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.128664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 00:27:58.437 [2024-07-15 13:02:29.138599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.138660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.138674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.138680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.138686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.138700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 00:27:58.437 [2024-07-15 13:02:29.148637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.148697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.148711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.148718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.148723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.148737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 
00:27:58.437 [2024-07-15 13:02:29.158648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.158713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.158727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.158734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.158739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.158753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 00:27:58.437 [2024-07-15 13:02:29.168646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.168702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.168716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.168723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.168729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.168742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 00:27:58.437 [2024-07-15 13:02:29.178725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.178786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.178799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.178806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.178812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.178826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 
00:27:58.437 [2024-07-15 13:02:29.188746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.437 [2024-07-15 13:02:29.188802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.437 [2024-07-15 13:02:29.188816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.437 [2024-07-15 13:02:29.188823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.437 [2024-07-15 13:02:29.188829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.437 [2024-07-15 13:02:29.188843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.437 qpair failed and we were unable to recover it. 00:27:58.437 [2024-07-15 13:02:29.198787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.198849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.198863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.198870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.198875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.198890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.208807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.208862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.208876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.208883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.208888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.208902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 
00:27:58.438 [2024-07-15 13:02:29.218886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.218961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.218975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.218982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.218990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.219005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.228861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.228922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.228936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.228943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.228949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.228963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.238889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.238951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.238966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.238973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.238978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.238992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 
00:27:58.438 [2024-07-15 13:02:29.248950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.249008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.249022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.249029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.249035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.249049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.258957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.259015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.259029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.259036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.259041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.259055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.268978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.269038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.269052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.269059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.269064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.269078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 
00:27:58.438 [2024-07-15 13:02:29.279001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.279061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.279075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.279082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.279087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.279101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.289026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.289083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.289097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.289104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.289110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.289124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.299070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.299134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.299148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.299155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.299160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.299175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 
00:27:58.438 [2024-07-15 13:02:29.309089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.309144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.309159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.309169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.309174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.309188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.319122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.319180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.319194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.319201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.319207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.319221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 00:27:58.438 [2024-07-15 13:02:29.329141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.329204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.438 [2024-07-15 13:02:29.329217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.438 [2024-07-15 13:02:29.329227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.438 [2024-07-15 13:02:29.329233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.438 [2024-07-15 13:02:29.329247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.438 qpair failed and we were unable to recover it. 
00:27:58.438 [2024-07-15 13:02:29.339171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.438 [2024-07-15 13:02:29.339234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.439 [2024-07-15 13:02:29.339248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.439 [2024-07-15 13:02:29.339255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.439 [2024-07-15 13:02:29.339261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.439 [2024-07-15 13:02:29.339275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.439 qpair failed and we were unable to recover it. 00:27:58.439 [2024-07-15 13:02:29.349219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.439 [2024-07-15 13:02:29.349327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.439 [2024-07-15 13:02:29.349342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.439 [2024-07-15 13:02:29.349348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.439 [2024-07-15 13:02:29.349354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.439 [2024-07-15 13:02:29.349369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.439 qpair failed and we were unable to recover it. 00:27:58.439 [2024-07-15 13:02:29.359214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.439 [2024-07-15 13:02:29.359275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.439 [2024-07-15 13:02:29.359289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.439 [2024-07-15 13:02:29.359296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.439 [2024-07-15 13:02:29.359301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.439 [2024-07-15 13:02:29.359315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.439 qpair failed and we were unable to recover it. 
00:27:58.439 [2024-07-15 13:02:29.369262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.439 [2024-07-15 13:02:29.369320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.439 [2024-07-15 13:02:29.369334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.439 [2024-07-15 13:02:29.369341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.439 [2024-07-15 13:02:29.369347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.439 [2024-07-15 13:02:29.369361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.439 qpair failed and we were unable to recover it. 00:27:58.439 [2024-07-15 13:02:29.379292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.439 [2024-07-15 13:02:29.379351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.439 [2024-07-15 13:02:29.379365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.439 [2024-07-15 13:02:29.379371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.439 [2024-07-15 13:02:29.379377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.439 [2024-07-15 13:02:29.379390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.439 qpair failed and we were unable to recover it. 00:27:58.439 [2024-07-15 13:02:29.389322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.439 [2024-07-15 13:02:29.389377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.439 [2024-07-15 13:02:29.389391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.439 [2024-07-15 13:02:29.389397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.439 [2024-07-15 13:02:29.389403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.439 [2024-07-15 13:02:29.389417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.439 qpair failed and we were unable to recover it. 
00:27:58.700 [2024-07-15 13:02:29.399357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.399421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.399435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.399444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.399450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.399464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 00:27:58.700 [2024-07-15 13:02:29.409341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.409411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.409425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.409432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.409438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.409452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 00:27:58.700 [2024-07-15 13:02:29.419456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.419513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.419527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.419533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.419539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.419553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 
00:27:58.700 [2024-07-15 13:02:29.429428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.429490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.429504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.429511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.429517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.429531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 00:27:58.700 [2024-07-15 13:02:29.439405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.439462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.439476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.439482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.439488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.439501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 00:27:58.700 [2024-07-15 13:02:29.449428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.449489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.449503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.449510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.449516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.449531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 
00:27:58.700 [2024-07-15 13:02:29.459526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.459587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.459601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.459607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.459613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.459627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 00:27:58.700 [2024-07-15 13:02:29.469570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.469653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.469667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.469674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.469680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.700 [2024-07-15 13:02:29.469694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.700 qpair failed and we were unable to recover it. 00:27:58.700 [2024-07-15 13:02:29.479646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.700 [2024-07-15 13:02:29.479755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.700 [2024-07-15 13:02:29.479770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.700 [2024-07-15 13:02:29.479776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.700 [2024-07-15 13:02:29.479782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.479797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 
00:27:58.701 [2024-07-15 13:02:29.489615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.489670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.489688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.489694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.489700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.489714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.499642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.499698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.499712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.499719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.499725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.499739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.509708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.509771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.509785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.509792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.509798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.509812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 
00:27:58.701 [2024-07-15 13:02:29.519697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.519754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.519768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.519775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.519781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.519796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.529723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.529831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.529845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.529851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.529857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.529874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.539754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.539815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.539829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.539836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.539841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.539855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 
00:27:58.701 [2024-07-15 13:02:29.549778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.549834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.549848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.549855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.549861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.549874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.559798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.559861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.559875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.559881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.559887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.559901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.569837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.569898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.569912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.569918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.569924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.569938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 
00:27:58.701 [2024-07-15 13:02:29.579862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.579921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.579939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.579946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.579951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.579965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.589899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.589963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.589978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.589984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.589990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.590004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.599927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.599987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.600002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.600008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.600014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.600028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 
00:27:58.701 [2024-07-15 13:02:29.609998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.610057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.610072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.610078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.610084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.610098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.620024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.701 [2024-07-15 13:02:29.620100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.701 [2024-07-15 13:02:29.620113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.701 [2024-07-15 13:02:29.620120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.701 [2024-07-15 13:02:29.620128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.701 [2024-07-15 13:02:29.620143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.701 qpair failed and we were unable to recover it. 00:27:58.701 [2024-07-15 13:02:29.630040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.702 [2024-07-15 13:02:29.630095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.702 [2024-07-15 13:02:29.630109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.702 [2024-07-15 13:02:29.630116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.702 [2024-07-15 13:02:29.630122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.702 [2024-07-15 13:02:29.630136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.702 qpair failed and we were unable to recover it. 
00:27:58.702 [2024-07-15 13:02:29.640039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.702 [2024-07-15 13:02:29.640097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.702 [2024-07-15 13:02:29.640111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.702 [2024-07-15 13:02:29.640118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.702 [2024-07-15 13:02:29.640124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.702 [2024-07-15 13:02:29.640138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.702 qpair failed and we were unable to recover it. 00:27:58.702 [2024-07-15 13:02:29.650069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.702 [2024-07-15 13:02:29.650170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.702 [2024-07-15 13:02:29.650184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.702 [2024-07-15 13:02:29.650190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.702 [2024-07-15 13:02:29.650197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.702 [2024-07-15 13:02:29.650212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.702 qpair failed and we were unable to recover it. 00:27:58.962 [2024-07-15 13:02:29.660141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.962 [2024-07-15 13:02:29.660199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.962 [2024-07-15 13:02:29.660213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.660220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.660228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.660244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 
00:27:58.963 [2024-07-15 13:02:29.670119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.670185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.670199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.670206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.670211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.670228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.680128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.680187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.680201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.680208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.680214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.680233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.690192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.690248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.690262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.690269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.690274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.690289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 
00:27:58.963 [2024-07-15 13:02:29.700217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.700283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.700298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.700305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.700310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.700324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.710162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.710217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.710234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.710244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.710250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.710264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.720254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.720307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.720321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.720328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.720334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.720347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 
00:27:58.963 [2024-07-15 13:02:29.730318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.730375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.730389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.730395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.730401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.730415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.740325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.740387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.740401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.740408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.740414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.740428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.750362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.750429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.750442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.750449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.750454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.750469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 
00:27:58.963 [2024-07-15 13:02:29.760368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.760425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.760440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.760447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.760453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.760467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.770408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.770463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.770478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.770484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.770490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.770504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.780437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.780492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.780506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.780513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.780519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.780532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 
00:27:58.963 [2024-07-15 13:02:29.790456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.790518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.790533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.790539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.790545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.963 [2024-07-15 13:02:29.790559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.963 qpair failed and we were unable to recover it. 00:27:58.963 [2024-07-15 13:02:29.800547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.963 [2024-07-15 13:02:29.800612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.963 [2024-07-15 13:02:29.800627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.963 [2024-07-15 13:02:29.800636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.963 [2024-07-15 13:02:29.800642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.800656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.810449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.810510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.810525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.810532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.810538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.810552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 
00:27:58.964 [2024-07-15 13:02:29.820547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.820631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.820645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.820651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.820657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.820671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.830517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.830579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.830594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.830600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.830607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.830620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.840600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.840661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.840675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.840682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.840688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.840701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 
00:27:58.964 [2024-07-15 13:02:29.850657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.850725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.850739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.850746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.850752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.850766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.860692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.860771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.860785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.860792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.860797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.860811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.870681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.870736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.870750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.870757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.870762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.870776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 
00:27:58.964 [2024-07-15 13:02:29.880708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.880772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.880785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.880792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.880798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.880811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.890752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.890806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.890823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.890830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.890836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.890850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:58.964 [2024-07-15 13:02:29.900779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.900837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.900851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.900857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.900863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.900877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 
00:27:58.964 [2024-07-15 13:02:29.910839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.964 [2024-07-15 13:02:29.910893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.964 [2024-07-15 13:02:29.910907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.964 [2024-07-15 13:02:29.910914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.964 [2024-07-15 13:02:29.910919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:58.964 [2024-07-15 13:02:29.910933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.964 qpair failed and we were unable to recover it. 00:27:59.224 [2024-07-15 13:02:29.920818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.224 [2024-07-15 13:02:29.920877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.224 [2024-07-15 13:02:29.920891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.224 [2024-07-15 13:02:29.920898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.224 [2024-07-15 13:02:29.920905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.224 [2024-07-15 13:02:29.920919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.224 qpair failed and we were unable to recover it. 00:27:59.224 [2024-07-15 13:02:29.930854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.224 [2024-07-15 13:02:29.930910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.224 [2024-07-15 13:02:29.930924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.224 [2024-07-15 13:02:29.930931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.224 [2024-07-15 13:02:29.930936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.224 [2024-07-15 13:02:29.930953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.224 qpair failed and we were unable to recover it. 
00:27:59.224 [2024-07-15 13:02:29.940886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.224 [2024-07-15 13:02:29.940947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.224 [2024-07-15 13:02:29.940961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.224 [2024-07-15 13:02:29.940968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.224 [2024-07-15 13:02:29.940974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.224 [2024-07-15 13:02:29.940987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.224 qpair failed and we were unable to recover it. 00:27:59.224 [2024-07-15 13:02:29.950857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.224 [2024-07-15 13:02:29.950917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.224 [2024-07-15 13:02:29.950931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.224 [2024-07-15 13:02:29.950938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.224 [2024-07-15 13:02:29.950944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.224 [2024-07-15 13:02:29.950958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.224 qpair failed and we were unable to recover it. 00:27:59.224 [2024-07-15 13:02:29.960943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.224 [2024-07-15 13:02:29.960999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.224 [2024-07-15 13:02:29.961013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.224 [2024-07-15 13:02:29.961020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.224 [2024-07-15 13:02:29.961026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.224 [2024-07-15 13:02:29.961040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.224 qpair failed and we were unable to recover it. 
00:27:59.224 [2024-07-15 13:02:29.970996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.224 [2024-07-15 13:02:29.971054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:29.971068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:29.971075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:29.971080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:29.971094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:29.981027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:29.981137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:29.981154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:29.981161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:29.981166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:29.981181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:29.991013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:29.991074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:29.991088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:29.991095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:29.991101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:29.991115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 
00:27:59.225 [2024-07-15 13:02:30.001071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.001149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.001164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.001171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.001177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.001191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.011017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.011078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.011093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.011100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.011106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.011120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.021295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.021373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.021392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.021400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.021411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.021428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 
00:27:59.225 [2024-07-15 13:02:30.031185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.031295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.031311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.031319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.031325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.031340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.041139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.041215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.041234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.041241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.041247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.041262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.051218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.051284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.051299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.051306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.051312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.051326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 
00:27:59.225 [2024-07-15 13:02:30.061250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.061312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.061327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.061333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.061340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.061355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.071325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.071393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.071407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.071414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.071420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.071434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.081343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.081404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.081419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.081425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.081431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.081446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 
00:27:59.225 [2024-07-15 13:02:30.091328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.091389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.091403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.091410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.091416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.091430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.101308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.101370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.101386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.101392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.225 [2024-07-15 13:02:30.101398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.225 [2024-07-15 13:02:30.101413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.225 qpair failed and we were unable to recover it. 00:27:59.225 [2024-07-15 13:02:30.111408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.225 [2024-07-15 13:02:30.111468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.225 [2024-07-15 13:02:30.111483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.225 [2024-07-15 13:02:30.111490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.111500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.111514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 
00:27:59.226 [2024-07-15 13:02:30.121443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.226 [2024-07-15 13:02:30.121504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.226 [2024-07-15 13:02:30.121518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.226 [2024-07-15 13:02:30.121525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.121531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.121545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 00:27:59.226 [2024-07-15 13:02:30.131442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.226 [2024-07-15 13:02:30.131503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.226 [2024-07-15 13:02:30.131518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.226 [2024-07-15 13:02:30.131524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.131531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.131545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 00:27:59.226 [2024-07-15 13:02:30.141462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.226 [2024-07-15 13:02:30.141531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.226 [2024-07-15 13:02:30.141546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.226 [2024-07-15 13:02:30.141553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.141559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.141573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 
00:27:59.226 [2024-07-15 13:02:30.151509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.226 [2024-07-15 13:02:30.151569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.226 [2024-07-15 13:02:30.151583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.226 [2024-07-15 13:02:30.151590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.151596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.151610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 00:27:59.226 [2024-07-15 13:02:30.161468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.226 [2024-07-15 13:02:30.161529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.226 [2024-07-15 13:02:30.161544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.226 [2024-07-15 13:02:30.161551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.161557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.161571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 00:27:59.226 [2024-07-15 13:02:30.171525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.226 [2024-07-15 13:02:30.171585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.226 [2024-07-15 13:02:30.171600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.226 [2024-07-15 13:02:30.171607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.226 [2024-07-15 13:02:30.171612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.226 [2024-07-15 13:02:30.171626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.226 qpair failed and we were unable to recover it. 
00:27:59.486 [2024-07-15 13:02:30.181632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.486 [2024-07-15 13:02:30.181715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.486 [2024-07-15 13:02:30.181730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.486 [2024-07-15 13:02:30.181737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.486 [2024-07-15 13:02:30.181743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.486 [2024-07-15 13:02:30.181757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.486 qpair failed and we were unable to recover it. 00:27:59.486 [2024-07-15 13:02:30.191539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.486 [2024-07-15 13:02:30.191638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.486 [2024-07-15 13:02:30.191652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.486 [2024-07-15 13:02:30.191659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.486 [2024-07-15 13:02:30.191665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.486 [2024-07-15 13:02:30.191680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.486 qpair failed and we were unable to recover it. 00:27:59.486 [2024-07-15 13:02:30.201625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.486 [2024-07-15 13:02:30.201729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.486 [2024-07-15 13:02:30.201743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.486 [2024-07-15 13:02:30.201934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.486 [2024-07-15 13:02:30.201940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.486 [2024-07-15 13:02:30.201955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.486 qpair failed and we were unable to recover it. 
00:27:59.486 [2024-07-15 13:02:30.211605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.486 [2024-07-15 13:02:30.211660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.486 [2024-07-15 13:02:30.211681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.486 [2024-07-15 13:02:30.211688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.486 [2024-07-15 13:02:30.211694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.486 [2024-07-15 13:02:30.211708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.486 qpair failed and we were unable to recover it. 00:27:59.486 [2024-07-15 13:02:30.221739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.486 [2024-07-15 13:02:30.221811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.486 [2024-07-15 13:02:30.221825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.486 [2024-07-15 13:02:30.221832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.486 [2024-07-15 13:02:30.221838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.486 [2024-07-15 13:02:30.221851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.486 qpair failed and we were unable to recover it. 00:27:59.486 [2024-07-15 13:02:30.231647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.486 [2024-07-15 13:02:30.231705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.486 [2024-07-15 13:02:30.231720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.486 [2024-07-15 13:02:30.231727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.486 [2024-07-15 13:02:30.231732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.486 [2024-07-15 13:02:30.231747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.486 qpair failed and we were unable to recover it. 
00:27:59.486 [2024-07-15 13:02:30.241761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.241820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.241835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.241842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.241848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.241862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.251776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.251833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.251881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.251887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.251893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.251907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.261747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.261806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.261820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.261827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.261833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.261847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 
00:27:59.487 [2024-07-15 13:02:30.271825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.271888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.271902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.271909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.271914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.271928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.281877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.281933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.281947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.281954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.281960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.281974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.291936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.291991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.292009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.292016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.292022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.292036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 
00:27:59.487 [2024-07-15 13:02:30.301886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.301945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.301959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.301966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.301971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.301985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.311951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.312041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.312056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.312062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.312068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.312082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.322038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.322098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.322112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.322118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.322124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.322138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 
00:27:59.487 [2024-07-15 13:02:30.332014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.332077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.332093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.332099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.332105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.332122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.342005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.342064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.342078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.342085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.342091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.342104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.352011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.352072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.352087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.352093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.352099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.352113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 
00:27:59.487 [2024-07-15 13:02:30.362060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.362113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.362127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.362134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.362140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.362155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.372156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.372222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.372242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.372250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.487 [2024-07-15 13:02:30.372256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.487 [2024-07-15 13:02:30.372270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.487 qpair failed and we were unable to recover it. 00:27:59.487 [2024-07-15 13:02:30.382177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.487 [2024-07-15 13:02:30.382239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.487 [2024-07-15 13:02:30.382256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.487 [2024-07-15 13:02:30.382264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.488 [2024-07-15 13:02:30.382270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.488 [2024-07-15 13:02:30.382283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.488 qpair failed and we were unable to recover it. 
00:27:59.488 [2024-07-15 13:02:30.392201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.488 [2024-07-15 13:02:30.392263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.488 [2024-07-15 13:02:30.392278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.488 [2024-07-15 13:02:30.392285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.488 [2024-07-15 13:02:30.392290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.488 [2024-07-15 13:02:30.392305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.488 qpair failed and we were unable to recover it. 00:27:59.488 [2024-07-15 13:02:30.402230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.488 [2024-07-15 13:02:30.402281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.488 [2024-07-15 13:02:30.402295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.488 [2024-07-15 13:02:30.402302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.488 [2024-07-15 13:02:30.402308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.488 [2024-07-15 13:02:30.402322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.488 qpair failed and we were unable to recover it. 00:27:59.488 [2024-07-15 13:02:30.412297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.488 [2024-07-15 13:02:30.412376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.488 [2024-07-15 13:02:30.412391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.488 [2024-07-15 13:02:30.412397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.488 [2024-07-15 13:02:30.412403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.488 [2024-07-15 13:02:30.412417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.488 qpair failed and we were unable to recover it. 
00:27:59.488 [2024-07-15 13:02:30.422305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.488 [2024-07-15 13:02:30.422361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.488 [2024-07-15 13:02:30.422374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.488 [2024-07-15 13:02:30.422381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.488 [2024-07-15 13:02:30.422390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.488 [2024-07-15 13:02:30.422404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.488 qpair failed and we were unable to recover it. 00:27:59.488 [2024-07-15 13:02:30.432317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.488 [2024-07-15 13:02:30.432375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.488 [2024-07-15 13:02:30.432394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.488 [2024-07-15 13:02:30.432401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.488 [2024-07-15 13:02:30.432407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.488 [2024-07-15 13:02:30.432422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.488 qpair failed and we were unable to recover it. 00:27:59.747 [2024-07-15 13:02:30.442387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.747 [2024-07-15 13:02:30.442448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.747 [2024-07-15 13:02:30.442463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.747 [2024-07-15 13:02:30.442469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.442475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.442489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 
00:27:59.748 [2024-07-15 13:02:30.452371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.452424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.452444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.452451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.452457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.452471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.462411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.462474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.462489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.462496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.462502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.462517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.472432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.472498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.472514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.472520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.472526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.472540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 
00:27:59.748 [2024-07-15 13:02:30.482481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.482541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.482555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.482562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.482568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.482581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.492520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.492606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.492620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.492626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.492632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.492646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.502542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.502633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.502648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.502654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.502660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.502675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 
00:27:59.748 [2024-07-15 13:02:30.512581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.512644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.512659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.512665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.512674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.512688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.522510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.522576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.522591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.522597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.522603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.522617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.532617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.532677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.532691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.532697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.532703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.532717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 
00:27:59.748 [2024-07-15 13:02:30.542620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.542678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.542692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.542698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.542704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.542718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.552603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.552666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.552681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.552687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.552693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.552707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.562735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.562793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.562807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.562814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.562819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.562834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 
00:27:59.748 [2024-07-15 13:02:30.572761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.572817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.572832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.572839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.572845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.572859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.748 [2024-07-15 13:02:30.582775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.748 [2024-07-15 13:02:30.582834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.748 [2024-07-15 13:02:30.582848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.748 [2024-07-15 13:02:30.582855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.748 [2024-07-15 13:02:30.582861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.748 [2024-07-15 13:02:30.582875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.748 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.592769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.592830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.592846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.592852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.592858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.592872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 
00:27:59.749 [2024-07-15 13:02:30.602815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.602872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.602887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.602899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.602905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.602918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.612839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.612897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.612912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.612919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.612925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.612939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.622873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.622928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.622942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.622949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.622955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.622969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 
00:27:59.749 [2024-07-15 13:02:30.632911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.632970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.632984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.632990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.632996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.633010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.642916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.642969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.642983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.642990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.642995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.643010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.652958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.653013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.653027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.653034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.653040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.653053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 
00:27:59.749 [2024-07-15 13:02:30.662989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.663048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.663066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.663073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.663079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.663094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.673017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.673075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.673089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.673095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.673101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.673115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:27:59.749 [2024-07-15 13:02:30.683051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.683107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.683121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.683127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.683133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.683147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 
00:27:59.749 [2024-07-15 13:02:30.693071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.749 [2024-07-15 13:02:30.693133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.749 [2024-07-15 13:02:30.693150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.749 [2024-07-15 13:02:30.693157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.749 [2024-07-15 13:02:30.693162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:27:59.749 [2024-07-15 13:02:30.693176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.749 qpair failed and we were unable to recover it. 00:28:00.009 [2024-07-15 13:02:30.703102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.703160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.703174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.703181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.703187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.703201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 00:28:00.009 [2024-07-15 13:02:30.713128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.713186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.713200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.713206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.713212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.713230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 
00:28:00.009 [2024-07-15 13:02:30.723156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.723216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.723233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.723240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.723246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.723260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 00:28:00.009 [2024-07-15 13:02:30.733245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.733306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.733319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.733326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.733331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.733349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 00:28:00.009 [2024-07-15 13:02:30.743234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.743328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.743341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.743348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.743353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.743368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 
00:28:00.009 [2024-07-15 13:02:30.753280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.753376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.753390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.753397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.753402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.753418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 00:28:00.009 [2024-07-15 13:02:30.763301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.763359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.009 [2024-07-15 13:02:30.763373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.009 [2024-07-15 13:02:30.763380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.009 [2024-07-15 13:02:30.763386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.009 [2024-07-15 13:02:30.763400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.009 qpair failed and we were unable to recover it. 00:28:00.009 [2024-07-15 13:02:30.773371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.009 [2024-07-15 13:02:30.773434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.773448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.773455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.773461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.773475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 
00:28:00.010 [2024-07-15 13:02:30.783364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.783420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.783437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.783443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.783449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.783462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.793385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.793447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.793461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.793467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.793473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.793487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.803410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.803468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.803482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.803489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.803495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.803509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 
00:28:00.010 [2024-07-15 13:02:30.813431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.813491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.813505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.813511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.813517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.813531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.823469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.823528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.823543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.823549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.823555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.823572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.833488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.833550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.833564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.833571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.833577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.833591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 
00:28:00.010 [2024-07-15 13:02:30.843520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.843575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.843590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.843596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.843602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.843616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.853554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.853613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.853628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.853634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.853639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.853653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.863582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.863642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.863656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.863662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.863668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.863683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 
00:28:00.010 [2024-07-15 13:02:30.873604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.873668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.873682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.873689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.873695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.873708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.883631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.883686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.883700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.883706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.883712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.883726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.893658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.893713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.893727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.893733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.893739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.893753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 
00:28:00.010 [2024-07-15 13:02:30.903691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.903753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.903767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.903773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.010 [2024-07-15 13:02:30.903779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.010 [2024-07-15 13:02:30.903793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.010 qpair failed and we were unable to recover it. 00:28:00.010 [2024-07-15 13:02:30.913727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.010 [2024-07-15 13:02:30.913788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.010 [2024-07-15 13:02:30.913802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.010 [2024-07-15 13:02:30.913809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.011 [2024-07-15 13:02:30.913817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.011 [2024-07-15 13:02:30.913832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.011 qpair failed and we were unable to recover it. 00:28:00.011 [2024-07-15 13:02:30.923753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.011 [2024-07-15 13:02:30.923811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.011 [2024-07-15 13:02:30.923825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.011 [2024-07-15 13:02:30.923831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.011 [2024-07-15 13:02:30.923837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.011 [2024-07-15 13:02:30.923851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.011 qpair failed and we were unable to recover it. 
00:28:00.011 [2024-07-15 13:02:30.933711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.011 [2024-07-15 13:02:30.933769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.011 [2024-07-15 13:02:30.933783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.011 [2024-07-15 13:02:30.933790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.011 [2024-07-15 13:02:30.933795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.011 [2024-07-15 13:02:30.933809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.011 qpair failed and we were unable to recover it. 00:28:00.011 [2024-07-15 13:02:30.943807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.011 [2024-07-15 13:02:30.943867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.011 [2024-07-15 13:02:30.943882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.011 [2024-07-15 13:02:30.943889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.011 [2024-07-15 13:02:30.943895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.011 [2024-07-15 13:02:30.943909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.011 qpair failed and we were unable to recover it. 00:28:00.011 [2024-07-15 13:02:30.953783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.011 [2024-07-15 13:02:30.953844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.011 [2024-07-15 13:02:30.953858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.011 [2024-07-15 13:02:30.953865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.011 [2024-07-15 13:02:30.953871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.011 [2024-07-15 13:02:30.953885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.011 qpair failed and we were unable to recover it. 
00:28:00.269 [2024-07-15 13:02:30.963872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:30.963927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:30.963941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:30.963948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:30.963953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.269 [2024-07-15 13:02:30.963967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.269 qpair failed and we were unable to recover it. 00:28:00.269 [2024-07-15 13:02:30.973891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:30.973947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:30.973962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:30.973968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:30.973974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.269 [2024-07-15 13:02:30.973988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.269 qpair failed and we were unable to recover it. 00:28:00.269 [2024-07-15 13:02:30.983923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:30.983982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:30.983996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:30.984002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:30.984008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.269 [2024-07-15 13:02:30.984022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.269 qpair failed and we were unable to recover it. 
00:28:00.269 [2024-07-15 13:02:30.993950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:30.994011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:30.994025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:30.994032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:30.994037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.269 [2024-07-15 13:02:30.994051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.269 qpair failed and we were unable to recover it. 00:28:00.269 [2024-07-15 13:02:31.003988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:31.004046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:31.004060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:31.004070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:31.004075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa948000b90 00:28:00.269 [2024-07-15 13:02:31.004089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.269 qpair failed and we were unable to recover it. 00:28:00.269 [2024-07-15 13:02:31.014054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:31.014219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:31.014290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:31.014316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:31.014335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa950000b90 00:28:00.269 [2024-07-15 13:02:31.014384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:00.269 qpair failed and we were unable to recover it. 
00:28:00.269 [2024-07-15 13:02:31.024073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.269 [2024-07-15 13:02:31.024177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.269 [2024-07-15 13:02:31.024207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.269 [2024-07-15 13:02:31.024221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.269 [2024-07-15 13:02:31.024242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa950000b90 00:28:00.269 [2024-07-15 13:02:31.024272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:00.269 qpair failed and we were unable to recover it. 00:28:00.269 [2024-07-15 13:02:31.024434] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:00.269 A controller has encountered a failure and is being reset. 00:28:00.269 Controller properly reset. 00:28:00.269 Initializing NVMe Controllers 00:28:00.269 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:00.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:00.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:00.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:00.269 Initialization complete. Launching workers. 
00:28:00.269 Starting thread on core 1 00:28:00.269 Starting thread on core 2 00:28:00.269 Starting thread on core 3 00:28:00.269 Starting thread on core 0 00:28:00.269 13:02:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:00.269 00:28:00.269 real 0m11.450s 00:28:00.269 user 0m21.562s 00:28:00.269 sys 0m4.533s 00:28:00.269 13:02:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.269 13:02:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.269 ************************************ 00:28:00.269 END TEST nvmf_target_disconnect_tc2 00:28:00.269 ************************************ 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:00.528 rmmod nvme_tcp 00:28:00.528 rmmod nvme_fabrics 00:28:00.528 rmmod nvme_keyring 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1876751 ']' 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1876751 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1876751 ']' 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1876751 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1876751 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1876751' 00:28:00.528 killing process with pid 1876751 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1876751 00:28:00.528 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1876751 00:28:00.787 
13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.787 13:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.688 13:02:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:02.688 00:28:02.688 real 0m19.906s 00:28:02.688 user 0m49.396s 00:28:02.688 sys 0m9.223s 00:28:02.688 13:02:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.688 13:02:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:02.688 ************************************ 00:28:02.688 END TEST nvmf_target_disconnect 00:28:02.688 ************************************ 00:28:02.947 13:02:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:02.947 13:02:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:28:02.948 13:02:33 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:02.948 13:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.948 13:02:33 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:28:02.948 00:28:02.948 real 21m31.047s 00:28:02.948 user 45m46.970s 00:28:02.948 sys 6m40.747s 00:28:02.948 13:02:33 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.948 13:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.948 ************************************ 00:28:02.948 END TEST nvmf_tcp 00:28:02.948 ************************************ 00:28:02.948 13:02:33 -- common/autotest_common.sh@1142 -- # return 0 00:28:02.948 13:02:33 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:28:02.948 13:02:33 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:02.948 13:02:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:02.948 13:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.948 13:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:02.948 ************************************ 00:28:02.948 START TEST spdkcli_nvmf_tcp 00:28:02.948 ************************************ 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:02.948 * Looking for test storage... 
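The spdkcli_nvmf_tcp test starting here feeds (command, expected-substring, match-flag) triples to spdkcli_job.py; each '/bdevs/...' or '/nvmf/...' path in the batch below is a thin shell over a single JSON-RPC call. As a rough rpc.py equivalent of the first few create steps — a sketch assuming a target already listening on the default /var/tmp/spdk.sock, not how the test itself is wired:

    scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1           # 32 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260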
00:28:02.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1878285 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1878285 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1878285 ']' 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.948 13:02:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.207 [2024-07-15 13:02:33.934048] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:03.207 [2024-07-15 13:02:33.934091] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878285 ] 00:28:03.207 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.207 [2024-07-15 13:02:33.999752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:03.207 [2024-07-15 13:02:34.080086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.207 [2024-07-15 13:02:34.080088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.144 13:02:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:04.144 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:04.144 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:04.144 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:04.144 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:04.144 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:04.144 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:04.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:04.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:04.144 
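The waitforlisten step above simply polls until the freshly forked nvmf_tgt (pid 1878285, reactors pinned to two cores by -m 0x3) answers on /var/tmp/spdk.sock. Hand-rolled, that wait is roughly the following sketch (the real helper also enforces a retry budget):

    build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # RPC server not accepting connections yet
    done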
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:04.144 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:04.144 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:04.144 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:04.144 ' 00:28:06.679 [2024-07-15 13:02:37.355531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.059 [2024-07-15 13:02:38.639833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:10.596 [2024-07-15 13:02:41.027266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:12.560 [2024-07-15 13:02:43.081645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:13.936 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:13.936 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:13.936 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:13.936 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:13.936 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:13.936 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:13.936 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:13.936 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:13.936 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:13.936 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:13.936 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:13.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:13.936 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:13.936 13:02:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.503 13:02:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:14.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:14.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:14.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:14.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:14.503 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:14.503 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:14.503 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:14.503 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:14.503 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:14.503 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:14.503 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:14.503 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:14.503 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:14.503 ' 00:28:19.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:19.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:19.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:19.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:19.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:19.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:19.768 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:19.768 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:19.768 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:19.768 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:19.768 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:28:19.768 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:19.768 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:19.768 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1878285 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1878285 ']' 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1878285 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1878285 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1878285' 00:28:19.768 killing process with pid 1878285 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1878285 00:28:19.768 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1878285 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1878285 ']' 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1878285 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1878285 ']' 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1878285 00:28:20.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1878285) - No such process 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1878285 is not found' 00:28:20.026 Process with pid 1878285 is not found 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:20.026 00:28:20.026 real 0m17.107s 00:28:20.026 user 0m37.211s 00:28:20.026 sys 0m0.837s 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.026 13:02:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:20.026 ************************************ 00:28:20.026 END TEST spdkcli_nvmf_tcp 00:28:20.026 ************************************ 00:28:20.026 13:02:50 -- common/autotest_common.sh@1142 -- # return 0 00:28:20.026 13:02:50 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:20.026 13:02:50 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:20.026 13:02:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.026 13:02:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.026 ************************************ 00:28:20.026 START TEST nvmf_identify_passthru 00:28:20.026 ************************************ 00:28:20.026 13:02:50 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:20.285 * Looking for test storage... 00:28:20.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.285 13:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.285 13:02:51 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.285 13:02:51 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.285 13:02:51 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.285 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.285 13:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.285 13:02:51 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.285 13:02:51 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.285 13:02:51 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.285 13:02:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:20.286 13:02:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.286 13:02:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.286 13:02:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:20.286 13:02:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.286 13:02:51 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.286 13:02:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.855 13:02:56 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:26.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:26.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.855 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:26.856 Found net devices under 0000:86:00.0: cvl_0_0 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:26.856 Found net devices under 0000:86:00.1: cvl_0_1 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
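With both E810 ports identified (cvl_0_0 and cvl_0_1), the nvmf_tcp_init trace that follows builds the usual two-endpoint topology: the target port is moved into a network namespace so NVMe/TCP traffic genuinely crosses between the two physical ports instead of looping back in the stack. Condensed from the steps traced below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # root ns -> namespace reachability check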
00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:26.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:28:26.856 00:28:26.856 --- 10.0.0.2 ping statistics --- 00:28:26.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.856 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:28:26.856 00:28:26.856 --- 10.0.0.1 ping statistics --- 00:28:26.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.856 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:26.856 13:02:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:26.856 13:02:56 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:26.856 13:02:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:26.856 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.146 
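The get_first_nvme_bdf helper above resolves 0000:5e:00.0 by letting gen_nvme.sh emit the local NVMe attach config and extracting traddr with jq; the serial and model greps then run spdk_nvme_identify directly against PCIe. As one pipeline (a sketch; the test takes the first of possibly several BDFs):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
             grep 'Serial Number:' | awk '{print $3}')           # BTLJ72430F0E1P0FGN in this run
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
            grep 'Model Number:' | awk '{print $3}')             # INTEL in this run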
13:03:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:30.146 13:03:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:30.146 13:03:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:30.146 13:03:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:30.405 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1885661 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.600 13:03:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1885661 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1885661 ']' 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.600 13:03:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:34.600 [2024-07-15 13:03:05.297437] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:34.600 [2024-07-15 13:03:05.297481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.600 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.600 [2024-07-15 13:03:05.366803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.600 [2024-07-15 13:03:05.446117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.600 [2024-07-15 13:03:05.446152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
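The target here is deliberately started with --wait-for-rpc: nvmf_set_config is only honored while the framework is still paused, so the passthru identify handler must be switched on before framework_start_init releases the subsystems. The JSON-RPC exchange echoed below boils down to this sketch (rpc_cmd in the trace is a thin wrapper over rpc.py):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr    # pre-init only
    scripts/rpc.py framework_start_init                         # logs 'Custom identify ctrlr handler enabled'
    scripts/rpc.py nvmf_create_transport -t tcp -u 8192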
00:28:34.600 [2024-07-15 13:03:05.446158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.600 [2024-07-15 13:03:05.446164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.600 [2024-07-15 13:03:05.446169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.600 [2024-07-15 13:03:05.446214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.600 [2024-07-15 13:03:05.446322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.600 [2024-07-15 13:03:05.446426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.600 [2024-07-15 13:03:05.446427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.170 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.170 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:35.170 13:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:35.170 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.170 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:35.170 INFO: Log level set to 20 00:28:35.170 INFO: Requests: 00:28:35.170 { 00:28:35.170 "jsonrpc": "2.0", 00:28:35.170 "method": "nvmf_set_config", 00:28:35.170 "id": 1, 00:28:35.170 "params": { 00:28:35.170 "admin_cmd_passthru": { 00:28:35.170 "identify_ctrlr": true 00:28:35.170 } 00:28:35.170 } 00:28:35.170 } 00:28:35.170 00:28:35.429 INFO: response: 00:28:35.429 { 00:28:35.429 "jsonrpc": "2.0", 00:28:35.429 "id": 1, 00:28:35.429 "result": true 00:28:35.429 } 00:28:35.429 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.429 13:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:35.429 INFO: Setting log level to 20 00:28:35.429 INFO: Setting log level to 20 00:28:35.429 INFO: Log level set to 20 00:28:35.429 INFO: Log level set to 20 00:28:35.429 INFO: Requests: 00:28:35.429 { 00:28:35.429 "jsonrpc": "2.0", 00:28:35.429 "method": "framework_start_init", 00:28:35.429 "id": 1 00:28:35.429 } 00:28:35.429 00:28:35.429 INFO: Requests: 00:28:35.429 { 00:28:35.429 "jsonrpc": "2.0", 00:28:35.429 "method": "framework_start_init", 00:28:35.429 "id": 1 00:28:35.429 } 00:28:35.429 00:28:35.429 [2024-07-15 13:03:06.210145] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:35.429 INFO: response: 00:28:35.429 { 00:28:35.429 "jsonrpc": "2.0", 00:28:35.429 "id": 1, 00:28:35.429 "result": true 00:28:35.429 } 00:28:35.429 00:28:35.429 INFO: response: 00:28:35.429 { 00:28:35.429 "jsonrpc": "2.0", 00:28:35.429 "id": 1, 00:28:35.429 "result": true 00:28:35.429 } 00:28:35.429 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.429 13:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.429 13:03:06 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.429 INFO: Setting log level to 40 00:28:35.429 INFO: Setting log level to 40 00:28:35.429 INFO: Setting log level to 40 00:28:35.429 [2024-07-15 13:03:06.223684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.429 13:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:35.429 13:03:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.429 13:03:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.757 Nvme0n1 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.757 [2024-07-15 13:03:09.120884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.757 [ 00:28:38.757 { 00:28:38.757 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:38.757 "subtype": "Discovery", 00:28:38.757 "listen_addresses": [], 00:28:38.757 "allow_any_host": true, 00:28:38.757 "hosts": [] 00:28:38.757 }, 00:28:38.757 { 00:28:38.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.757 "subtype": "NVMe", 00:28:38.757 "listen_addresses": [ 00:28:38.757 { 00:28:38.757 "trtype": "TCP", 00:28:38.757 "adrfam": "IPv4", 00:28:38.757 "traddr": "10.0.0.2", 00:28:38.757 "trsvcid": "4420" 00:28:38.757 } 00:28:38.757 ], 00:28:38.757 "allow_any_host": true, 00:28:38.757 "hosts": [], 00:28:38.757 "serial_number": 
"SPDK00000000000001", 00:28:38.757 "model_number": "SPDK bdev Controller", 00:28:38.757 "max_namespaces": 1, 00:28:38.757 "min_cntlid": 1, 00:28:38.757 "max_cntlid": 65519, 00:28:38.757 "namespaces": [ 00:28:38.757 { 00:28:38.757 "nsid": 1, 00:28:38.757 "bdev_name": "Nvme0n1", 00:28:38.757 "name": "Nvme0n1", 00:28:38.757 "nguid": "9BF3F18C620446E69BB7499E864DD333", 00:28:38.757 "uuid": "9bf3f18c-6204-46e6-9bb7-499e864dd333" 00:28:38.757 } 00:28:38.757 ] 00:28:38.757 } 00:28:38.757 ] 00:28:38.757 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:38.757 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.757 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:38.758 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:38.758 13:03:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:38.758 rmmod nvme_tcp 00:28:38.758 rmmod nvme_fabrics 00:28:38.758 rmmod nvme_keyring 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:38.758 13:03:09 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1885661 ']' 00:28:38.758 13:03:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1885661 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1885661 ']' 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1885661 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.758 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1885661 00:28:39.016 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:39.016 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:39.016 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1885661' 00:28:39.016 killing process with pid 1885661 00:28:39.016 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1885661 00:28:39.016 13:03:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1885661 00:28:40.394 13:03:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.394 13:03:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.394 13:03:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.394 13:03:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.394 13:03:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.394 13:03:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.394 13:03:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:40.394 13:03:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.295 13:03:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.296 00:28:42.296 real 0m22.310s 00:28:42.296 user 0m30.383s 00:28:42.296 sys 0m5.222s 00:28:42.296 13:03:13 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.296 13:03:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:42.296 ************************************ 00:28:42.296 END TEST nvmf_identify_passthru 00:28:42.296 ************************************ 00:28:42.554 13:03:13 -- common/autotest_common.sh@1142 -- # return 0 00:28:42.554 13:03:13 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:42.554 13:03:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:42.554 13:03:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.554 13:03:13 -- common/autotest_common.sh@10 -- # set +x 00:28:42.554 ************************************ 00:28:42.554 START TEST nvmf_dif 00:28:42.554 ************************************ 00:28:42.554 13:03:13 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:42.554 * Looking for test storage... 
00:28:42.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:42.554 13:03:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.554 13:03:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.554 13:03:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.554 13:03:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.554 13:03:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.554 13:03:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.554 13:03:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.554 13:03:13 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:42.554 13:03:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:42.554 13:03:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:42.554 13:03:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:42.554 13:03:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:42.554 13:03:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:42.554 13:03:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.554 13:03:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:42.554 13:03:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:42.554 13:03:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:42.554 13:03:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:49.121 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:49.121 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:49.121 Found net devices under 0000:86:00.0: cvl_0_0 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:49.121 Found net devices under 0000:86:00.1: cvl_0_1 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:49.121 13:03:18 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.122 13:03:18 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.122 13:03:18 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.122 13:03:18 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.122 13:03:18 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:49.122 13:03:18 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.122 13:03:19 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:49.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:28:49.122 00:28:49.122 --- 10.0.0.2 ping statistics --- 00:28:49.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.122 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:49.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:28:49.122 00:28:49.122 --- 10.0.0.1 ping statistics --- 00:28:49.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.122 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:49.122 13:03:19 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.028 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:51.028 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:51.028 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:51.028 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:51.029 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:51.029 13:03:21 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:51.029 13:03:21 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1891532 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1891532 00:28:51.029 13:03:21 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1891532 ']' 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.029 13:03:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:51.029 [2024-07-15 13:03:21.973478] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:51.029 [2024-07-15 13:03:21.973524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.288 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.288 [2024-07-15 13:03:22.042540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.288 [2024-07-15 13:03:22.115137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.288 [2024-07-15 13:03:22.115180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.288 [2024-07-15 13:03:22.115186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.288 [2024-07-15 13:03:22.115192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.288 [2024-07-15 13:03:22.115197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:51.288 [2024-07-15 13:03:22.115215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.855 13:03:22 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.855 13:03:22 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:51.855 13:03:22 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.855 13:03:22 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:51.855 13:03:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 13:03:22 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.114 13:03:22 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:52.114 13:03:22 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:52.114 13:03:22 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.114 13:03:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 [2024-07-15 13:03:22.826093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.114 13:03:22 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.114 13:03:22 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:52.114 13:03:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:52.114 13:03:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.114 13:03:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 ************************************ 00:28:52.114 START TEST fio_dif_1_default 00:28:52.114 ************************************ 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 bdev_null0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:52.114 [2024-07-15 13:03:22.902415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.114 { 00:28:52.114 "params": { 00:28:52.114 "name": "Nvme$subsystem", 00:28:52.114 "trtype": "$TEST_TRANSPORT", 00:28:52.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.114 "adrfam": "ipv4", 00:28:52.114 "trsvcid": "$NVMF_PORT", 00:28:52.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.114 "hdgst": ${hdgst:-false}, 00:28:52.114 "ddgst": ${ddgst:-false} 00:28:52.114 }, 00:28:52.114 "method": "bdev_nvme_attach_controller" 00:28:52.114 } 00:28:52.114 EOF 00:28:52.114 )") 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:52.114 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:52.115 "params": { 00:28:52.115 "name": "Nvme0", 00:28:52.115 "trtype": "tcp", 00:28:52.115 "traddr": "10.0.0.2", 00:28:52.115 "adrfam": "ipv4", 00:28:52.115 "trsvcid": "4420", 00:28:52.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.115 "hdgst": false, 00:28:52.115 "ddgst": false 00:28:52.115 }, 00:28:52.115 "method": "bdev_nvme_attach_controller" 00:28:52.115 }' 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:52.115 13:03:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.374 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:52.374 fio-3.35 00:28:52.374 Starting 1 thread 00:28:52.374 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.582 00:29:04.583 filename0: (groupid=0, jobs=1): err= 0: pid=1892115: Mon Jul 15 13:03:33 2024 00:29:04.583 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:29:04.583 slat (nsec): min=5592, max=40131, avg=6124.94, stdev=1131.75 00:29:04.583 clat (usec): min=479, max=45550, avg=21039.34, stdev=20452.40 00:29:04.583 lat (usec): min=485, max=45581, avg=21045.46, stdev=20452.35 00:29:04.583 clat percentiles (usec): 00:29:04.583 | 1.00th=[ 494], 5.00th=[ 506], 10.00th=[ 523], 20.00th=[ 537], 00:29:04.583 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[41157], 60.00th=[41157], 00:29:04.583 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:04.583 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:29:04.583 | 99.99th=[45351] 00:29:04.583 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:29:04.583 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:29:04.583 
lat (usec) : 500=2.68%, 750=47.21% 00:29:04.583 lat (msec) : 50=50.11% 00:29:04.583 cpu : usr=94.97%, sys=4.78%, ctx=13, majf=0, minf=255 00:29:04.583 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:04.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.583 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.583 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:04.583 00:29:04.583 Run status group 0 (all jobs): 00:29:04.583 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10002-10002msec 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 00:29:04.583 real 0m11.270s 00:29:04.583 user 0m16.366s 00:29:04.583 sys 0m0.824s 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 ************************************ 00:29:04.583 END TEST fio_dif_1_default 00:29:04.583 ************************************ 00:29:04.583 13:03:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:04.583 13:03:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:04.583 13:03:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:04.583 13:03:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 ************************************ 00:29:04.583 START TEST fio_dif_1_multi_subsystems 00:29:04.583 ************************************ 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 bdev_null0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 [2024-07-15 13:03:34.236471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 bdev_null1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:04.583 { 00:29:04.583 "params": { 00:29:04.583 "name": "Nvme$subsystem", 00:29:04.583 "trtype": "$TEST_TRANSPORT", 00:29:04.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.583 "adrfam": "ipv4", 00:29:04.583 "trsvcid": "$NVMF_PORT", 00:29:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.583 "hdgst": ${hdgst:-false}, 00:29:04.583 "ddgst": ${ddgst:-false} 00:29:04.583 }, 00:29:04.583 "method": "bdev_nvme_attach_controller" 00:29:04.583 } 00:29:04.583 EOF 00:29:04.583 )") 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:04.583 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme0", 00:29:04.584 "trtype": "tcp", 00:29:04.584 "traddr": "10.0.0.2", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "4420", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.584 "hdgst": false, 00:29:04.584 "ddgst": false 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 },{ 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme1", 00:29:04.584 "trtype": "tcp", 00:29:04.584 "traddr": "10.0.0.2", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "4420", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.584 "hdgst": false, 00:29:04.584 "ddgst": false 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 }' 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:04.584 13:03:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:04.584 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:04.584 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:04.584 fio-3.35 00:29:04.584 Starting 2 threads 00:29:04.584 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.634 00:29:14.634 filename0: (groupid=0, jobs=1): err= 0: pid=1893986: Mon Jul 15 13:03:45 2024 00:29:14.634 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:29:14.634 slat (nsec): min=5999, max=24739, avg=7770.79, stdev=2720.94 00:29:14.634 clat (usec): min=40769, max=42034, avg=41018.44, stdev=202.86 00:29:14.634 lat (usec): min=40775, max=42046, avg=41026.21, stdev=203.06 00:29:14.634 clat percentiles (usec): 00:29:14.634 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:14.634 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:14.634 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:14.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:14.634 | 99.99th=[42206] 
00:29:14.634 bw ( KiB/s): min= 384, max= 416, per=49.77%, avg=388.80, stdev=11.72, samples=20 00:29:14.634 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:29:14.634 lat (msec) : 50=100.00% 00:29:14.634 cpu : usr=97.60%, sys=2.13%, ctx=13, majf=0, minf=156 00:29:14.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:14.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.634 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:14.634 filename1: (groupid=0, jobs=1): err= 0: pid=1893987: Mon Jul 15 13:03:45 2024 00:29:14.634 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:29:14.634 slat (nsec): min=5998, max=27206, avg=7677.43, stdev=2580.42 00:29:14.634 clat (usec): min=40806, max=41990, avg=41005.37, stdev=166.12 00:29:14.634 lat (usec): min=40812, max=42002, avg=41013.05, stdev=166.33 00:29:14.634 clat percentiles (usec): 00:29:14.634 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:14.634 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:14.634 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:14.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:14.634 | 99.99th=[42206] 00:29:14.634 bw ( KiB/s): min= 384, max= 416, per=49.77%, avg=388.80, stdev=11.72, samples=20 00:29:14.634 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:29:14.634 lat (msec) : 50=100.00% 00:29:14.634 cpu : usr=97.65%, sys=2.10%, ctx=12, majf=0, minf=95 00:29:14.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:14.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.634 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:14.634 00:29:14.634 Run status group 0 (all jobs): 00:29:14.634 READ: bw=780KiB/s (798kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10011-10015msec 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.634 13:03:45 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.634 00:29:14.634 real 0m11.209s 00:29:14.634 user 0m26.316s 00:29:14.634 sys 0m0.704s 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.634 13:03:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:14.634 ************************************ 00:29:14.634 END TEST fio_dif_1_multi_subsystems 00:29:14.634 ************************************ 00:29:14.634 13:03:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:14.634 13:03:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:14.634 13:03:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:14.634 13:03:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.634 13:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:14.634 ************************************ 00:29:14.634 START TEST fio_dif_rand_params 00:29:14.634 ************************************ 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
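At this point the trace enters create_subsystem 0 for the fio_dif_rand_params case (NULL_DIF=3). Gathered from the rpc_cmd lines that follow, its effective body is approximately the function below; the wrapper is an assumption, the RPC names and arguments are copied from the xtrace.

# Approximate reconstruction of target/dif.sh's create_subsystem for NULL_DIF=3.
create_subsystem() {
  local sub_id=$1
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
  rpc_cmd bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 3
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
    --serial-number "53313233-${sub_id}" --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
    -t tcp -a 10.0.0.2 -s 4420
}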
00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:14.634 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:14.635 bdev_null0 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:14.635 [2024-07-15 13:03:45.522246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.635 { 00:29:14.635 "params": { 00:29:14.635 "name": "Nvme$subsystem", 00:29:14.635 "trtype": "$TEST_TRANSPORT", 00:29:14.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.635 "adrfam": "ipv4", 
00:29:14.635 "trsvcid": "$NVMF_PORT", 00:29:14.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.635 "hdgst": ${hdgst:-false}, 00:29:14.635 "ddgst": ${ddgst:-false} 00:29:14.635 }, 00:29:14.635 "method": "bdev_nvme_attach_controller" 00:29:14.635 } 00:29:14.635 EOF 00:29:14.635 )") 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:14.635 "params": {
00:29:14.635 "name": "Nvme0",
00:29:14.635 "trtype": "tcp",
00:29:14.635 "traddr": "10.0.0.2",
00:29:14.635 "adrfam": "ipv4",
00:29:14.635 "trsvcid": "4420",
00:29:14.635 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:14.635 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:14.635 "hdgst": false,
00:29:14.635 "ddgst": false
00:29:14.635 },
00:29:14.635 "method": "bdev_nvme_attach_controller"
00:29:14.635 }'
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:29:14.635 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:14.907 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:14.907 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:14.907 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:29:14.907 13:03:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:15.166 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:29:15.166 ...
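fio's job banner above (filename0, randread, 128 KiB blocks, iodepth 3) reflects the job file gen_fio_conf supplies on the second descriptor; the file itself is never echoed into the log. A plausible reconstruction from the parameters chosen for this case (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) is sketched below; every key is inferred, and the filename assumes the bdev created by attaching controller "Nvme0".

# Hypothetical job description on /dev/fd/61 -- inferred, not present in the log.
cat <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=5
[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
filename=Nvme0n1
FIO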
00:29:15.166 fio-3.35 00:29:15.166 Starting 3 threads 00:29:15.166 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.730 00:29:21.730 filename0: (groupid=0, jobs=1): err= 0: pid=1895831: Mon Jul 15 13:03:51 2024 00:29:21.730 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(131MiB/5005msec) 00:29:21.730 slat (nsec): min=6379, max=37802, avg=11767.82, stdev=4083.25 00:29:21.730 clat (usec): min=3720, max=62268, avg=14366.10, stdev=15153.02 00:29:21.730 lat (usec): min=3728, max=62286, avg=14377.86, stdev=15154.27 00:29:21.730 clat percentiles (usec): 00:29:21.730 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 5276], 20.00th=[ 6194], 00:29:21.730 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 8094], 60.00th=[ 8717], 00:29:21.730 | 70.00th=[ 9503], 80.00th=[11076], 90.00th=[46400], 95.00th=[49021], 00:29:21.730 | 99.00th=[53216], 99.50th=[53740], 99.90th=[60031], 99.95th=[62129], 00:29:21.730 | 99.99th=[62129] 00:29:21.730 bw ( KiB/s): min= 7680, max=46848, per=33.89%, avg=26649.60, stdev=16001.97, samples=10 00:29:21.730 iops : min= 60, max= 366, avg=208.20, stdev=125.02, samples=10 00:29:21.730 lat (msec) : 4=1.25%, 10=73.47%, 20=7.57%, 50=14.66%, 100=3.07% 00:29:21.730 cpu : usr=96.42%, sys=3.24%, ctx=7, majf=0, minf=32 00:29:21.730 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.730 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.730 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:21.730 filename0: (groupid=0, jobs=1): err= 0: pid=1895832: Mon Jul 15 13:03:51 2024 00:29:21.730 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(141MiB/5005msec) 00:29:21.730 slat (nsec): min=6322, max=45704, avg=12124.10, stdev=5733.02 00:29:21.730 clat (usec): min=3571, max=91286, avg=13295.05, stdev=14271.45 00:29:21.730 lat (usec): min=3578, max=91298, avg=13307.17, stdev=14273.96 00:29:21.730 clat percentiles (usec): 00:29:21.730 | 1.00th=[ 3884], 5.00th=[ 4359], 10.00th=[ 5276], 20.00th=[ 5932], 00:29:21.730 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7898], 60.00th=[ 8717], 00:29:21.730 | 70.00th=[ 9503], 80.00th=[10814], 90.00th=[43779], 95.00th=[47449], 00:29:21.730 | 99.00th=[50594], 99.50th=[52691], 99.90th=[91751], 99.95th=[91751], 00:29:21.730 | 99.99th=[91751] 00:29:21.730 bw ( KiB/s): min= 8192, max=49664, per=36.63%, avg=28800.00, stdev=18195.12, samples=10 00:29:21.730 iops : min= 64, max= 388, avg=225.00, stdev=142.15, samples=10 00:29:21.730 lat (msec) : 4=1.77%, 10=73.14%, 20=10.02%, 50=13.74%, 100=1.33% 00:29:21.730 cpu : usr=96.60%, sys=3.04%, ctx=20, majf=0, minf=202 00:29:21.730 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.730 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.730 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:21.730 filename0: (groupid=0, jobs=1): err= 0: pid=1895833: Mon Jul 15 13:03:51 2024 00:29:21.730 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(116MiB/5043msec) 00:29:21.730 slat (nsec): min=6343, max=39307, avg=14219.54, stdev=7631.21 00:29:21.730 clat (usec): min=4316, max=88964, avg=16274.12, stdev=15898.85 00:29:21.730 lat (usec): min=4323, max=88991, avg=16288.34, stdev=15903.63 00:29:21.730 clat 
percentiles (usec): 00:29:21.730 | 1.00th=[ 5014], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6915], 00:29:21.730 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:29:21.730 | 70.00th=[10290], 80.00th=[38536], 90.00th=[47973], 95.00th=[48497], 00:29:21.730 | 99.00th=[50594], 99.50th=[51643], 99.90th=[88605], 99.95th=[88605], 00:29:21.730 | 99.99th=[88605] 00:29:21.730 bw ( KiB/s): min= 8192, max=38912, per=30.09%, avg=23661.70, stdev=13254.81, samples=10 00:29:21.730 iops : min= 64, max= 304, avg=184.80, stdev=103.49, samples=10 00:29:21.730 lat (msec) : 10=67.49%, 20=11.88%, 50=18.68%, 100=1.94% 00:29:21.730 cpu : usr=96.39%, sys=3.25%, ctx=10, majf=0, minf=64 00:29:21.730 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.730 issued rwts: total=926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.730 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:21.730 00:29:21.730 Run status group 0 (all jobs): 00:29:21.730 READ: bw=76.8MiB/s (80.5MB/s), 23.0MiB/s-28.2MiB/s (24.1MB/s-29.5MB/s), io=387MiB (406MB), run=5005-5043msec 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:21.730 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 bdev_null0 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 [2024-07-15 13:03:51.724862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 bdev_null1 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 bdev_null2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.731 { 00:29:21.731 "params": { 00:29:21.731 "name": "Nvme$subsystem", 00:29:21.731 "trtype": "$TEST_TRANSPORT", 00:29:21.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.731 "adrfam": "ipv4", 00:29:21.731 "trsvcid": "$NVMF_PORT", 00:29:21.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.731 "hdgst": ${hdgst:-false}, 00:29:21.731 "ddgst": ${ddgst:-false} 00:29:21.731 }, 00:29:21.731 "method": "bdev_nvme_attach_controller" 00:29:21.731 } 00:29:21.731 EOF 00:29:21.731 )") 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.731 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.732 { 00:29:21.732 "params": { 00:29:21.732 "name": "Nvme$subsystem", 00:29:21.732 "trtype": "$TEST_TRANSPORT", 00:29:21.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.732 "adrfam": "ipv4", 00:29:21.732 "trsvcid": "$NVMF_PORT", 00:29:21.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.732 "hdgst": ${hdgst:-false}, 00:29:21.732 "ddgst": ${ddgst:-false} 00:29:21.732 }, 00:29:21.732 "method": "bdev_nvme_attach_controller" 00:29:21.732 } 00:29:21.732 EOF 00:29:21.732 )") 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file++ )) 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.732 { 00:29:21.732 "params": { 00:29:21.732 "name": "Nvme$subsystem", 00:29:21.732 "trtype": "$TEST_TRANSPORT", 00:29:21.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.732 "adrfam": "ipv4", 00:29:21.732 "trsvcid": "$NVMF_PORT", 00:29:21.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.732 "hdgst": ${hdgst:-false}, 00:29:21.732 "ddgst": ${ddgst:-false} 00:29:21.732 }, 00:29:21.732 "method": "bdev_nvme_attach_controller" 00:29:21.732 } 00:29:21.732 EOF 00:29:21.732 )") 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:21.732 "params": { 00:29:21.732 "name": "Nvme0", 00:29:21.732 "trtype": "tcp", 00:29:21.732 "traddr": "10.0.0.2", 00:29:21.732 "adrfam": "ipv4", 00:29:21.732 "trsvcid": "4420", 00:29:21.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:21.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:21.732 "hdgst": false, 00:29:21.732 "ddgst": false 00:29:21.732 }, 00:29:21.732 "method": "bdev_nvme_attach_controller" 00:29:21.732 },{ 00:29:21.732 "params": { 00:29:21.732 "name": "Nvme1", 00:29:21.732 "trtype": "tcp", 00:29:21.732 "traddr": "10.0.0.2", 00:29:21.732 "adrfam": "ipv4", 00:29:21.732 "trsvcid": "4420", 00:29:21.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:21.732 "hdgst": false, 00:29:21.732 "ddgst": false 00:29:21.732 }, 00:29:21.732 "method": "bdev_nvme_attach_controller" 00:29:21.732 },{ 00:29:21.732 "params": { 00:29:21.732 "name": "Nvme2", 00:29:21.732 "trtype": "tcp", 00:29:21.732 "traddr": "10.0.0.2", 00:29:21.732 "adrfam": "ipv4", 00:29:21.732 "trsvcid": "4420", 00:29:21.732 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:21.732 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:21.732 "hdgst": false, 00:29:21.732 "ddgst": false 00:29:21.732 }, 00:29:21.732 "method": "bdev_nvme_attach_controller" 00:29:21.732 }' 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:21.732 13:03:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:21.732 13:03:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:21.732 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:21.732 ... 00:29:21.732 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:21.732 ... 00:29:21.732 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:21.732 ... 00:29:21.732 fio-3.35 00:29:21.732 Starting 24 threads 00:29:21.732 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.916 00:29:33.916 filename0: (groupid=0, jobs=1): err= 0: pid=1897097: Mon Jul 15 13:04:03 2024 00:29:33.916 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10017msec) 00:29:33.916 slat (nsec): min=6925, max=79308, avg=30174.77, stdev=19137.44 00:29:33.916 clat (usec): min=24828, max=51142, avg=27718.46, stdev=1278.90 00:29:33.916 lat (usec): min=24837, max=51173, avg=27748.63, stdev=1277.81 00:29:33.916 clat percentiles (usec): 00:29:33.916 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:33.916 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.916 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.916 | 99.00th=[28443], 99.50th=[28705], 99.90th=[51119], 99.95th=[51119], 00:29:33.916 | 99.99th=[51119] 00:29:33.916 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:29:33.916 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:29:33.916 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.916 cpu : usr=98.72%, sys=0.89%, ctx=13, majf=0, minf=28 00:29:33.916 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.916 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.916 filename0: (groupid=0, jobs=1): err= 0: pid=1897098: Mon Jul 15 13:04:03 2024 00:29:33.916 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.4MiB/10005msec) 00:29:33.916 slat (nsec): min=6912, max=75560, avg=26004.67, stdev=8934.66 00:29:33.916 clat (usec): min=4570, max=64070, avg=27719.16, stdev=2296.20 00:29:33.916 lat (usec): min=4577, max=64102, avg=27745.16, stdev=2296.69 00:29:33.916 clat percentiles (usec): 00:29:33.916 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.916 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.916 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.916 | 99.00th=[28705], 99.50th=[32113], 99.90th=[64226], 99.95th=[64226], 00:29:33.916 | 99.99th=[64226] 00:29:33.916 bw ( KiB/s): min= 2032, max= 2304, per=4.14%, avg=2276.21, stdev=71.52, samples=19 00:29:33.916 iops : min= 508, max= 576, avg=569.05, stdev=17.88, samples=19 00:29:33.916 lat (msec) : 
10=0.24%, 50=99.48%, 100=0.28% 00:29:33.916 cpu : usr=98.65%, sys=0.96%, ctx=17, majf=0, minf=28 00:29:33.916 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:33.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.916 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.916 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.916 filename0: (groupid=0, jobs=1): err= 0: pid=1897099: Mon Jul 15 13:04:03 2024 00:29:33.916 read: IOPS=570, BW=2283KiB/s (2337kB/s)(22.3MiB/10010msec) 00:29:33.916 slat (nsec): min=6332, max=52678, avg=26479.73, stdev=7898.33 00:29:33.916 clat (usec): min=20643, max=72268, avg=27811.07, stdev=2401.30 00:29:33.916 lat (usec): min=20657, max=72285, avg=27837.55, stdev=2400.28 00:29:33.916 clat percentiles (usec): 00:29:33.916 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.916 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.916 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.916 | 99.00th=[28705], 99.50th=[28967], 99.90th=[71828], 99.95th=[71828], 00:29:33.916 | 99.99th=[71828] 00:29:33.916 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2277.05, stdev=68.52, samples=19 00:29:33.916 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:29:33.916 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.916 cpu : usr=98.57%, sys=1.03%, ctx=24, majf=0, minf=27 00:29:33.916 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:33.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename0: (groupid=0, jobs=1): err= 0: pid=1897100: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:29:33.917 slat (nsec): min=7527, max=54347, avg=26213.76, stdev=7938.63 00:29:33.917 clat (usec): min=20304, max=66056, avg=27776.40, stdev=2091.22 00:29:33.917 lat (usec): min=20314, max=66077, avg=27802.61, stdev=2090.59 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.917 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.917 | 99.00th=[28705], 99.50th=[28967], 99.90th=[65799], 99.95th=[65799], 00:29:33.917 | 99.99th=[65799] 00:29:33.917 bw ( KiB/s): min= 2048, max= 2320, per=4.14%, avg=2277.05, stdev=68.73, samples=19 00:29:33.917 iops : min= 512, max= 580, avg=569.26, stdev=17.18, samples=19 00:29:33.917 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.917 cpu : usr=98.83%, sys=0.78%, ctx=12, majf=0, minf=25 00:29:33.917 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename0: (groupid=0, jobs=1): err= 0: pid=1897101: Mon Jul 15 13:04:03 2024 
00:29:33.917 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10005msec) 00:29:33.917 slat (nsec): min=6953, max=78256, avg=23535.01, stdev=16069.52 00:29:33.917 clat (usec): min=13707, max=44767, avg=27753.61, stdev=1382.15 00:29:33.917 lat (usec): min=13716, max=44786, avg=27777.15, stdev=1381.19 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:29:33.917 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.917 | 99.00th=[28967], 99.50th=[40633], 99.90th=[41681], 99.95th=[43779], 00:29:33.917 | 99.99th=[44827] 00:29:33.917 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=44.84, samples=20 00:29:33.917 iops : min= 544, max= 576, avg=571.20, stdev=11.21, samples=20 00:29:33.917 lat (msec) : 20=0.31%, 50=99.69% 00:29:33.917 cpu : usr=98.69%, sys=0.92%, ctx=17, majf=0, minf=32 00:29:33.917 IO depths : 1=5.9%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename0: (groupid=0, jobs=1): err= 0: pid=1897102: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10005msec) 00:29:33.917 slat (nsec): min=7081, max=57885, avg=20905.59, stdev=8558.24 00:29:33.917 clat (usec): min=20123, max=40869, avg=27752.89, stdev=839.90 00:29:33.917 lat (usec): min=20137, max=40893, avg=27773.79, stdev=839.81 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.917 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.917 | 99.00th=[28967], 99.50th=[28967], 99.90th=[40633], 99.95th=[40633], 00:29:33.917 | 99.99th=[40633] 00:29:33.917 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:29:33.917 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:29:33.917 lat (msec) : 50=100.00% 00:29:33.917 cpu : usr=98.79%, sys=0.83%, ctx=17, majf=0, minf=27 00:29:33.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename0: (groupid=0, jobs=1): err= 0: pid=1897103: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10021msec) 00:29:33.917 slat (nsec): min=7445, max=48374, avg=16286.72, stdev=7514.52 00:29:33.917 clat (usec): min=20551, max=35444, avg=27785.23, stdev=692.11 00:29:33.917 lat (usec): min=20563, max=35461, avg=27801.51, stdev=691.33 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:29:33.917 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.917 | 99.00th=[28705], 
99.50th=[28967], 99.90th=[35390], 99.95th=[35390], 00:29:33.917 | 99.99th=[35390] 00:29:33.917 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:29:33.917 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:29:33.917 lat (msec) : 50=100.00% 00:29:33.917 cpu : usr=98.42%, sys=1.19%, ctx=20, majf=0, minf=33 00:29:33.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename0: (groupid=0, jobs=1): err= 0: pid=1897104: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10006msec) 00:29:33.917 slat (nsec): min=6326, max=53316, avg=26538.69, stdev=7644.00 00:29:33.917 clat (usec): min=21674, max=74523, avg=27788.08, stdev=2295.66 00:29:33.917 lat (usec): min=21696, max=74541, avg=27814.62, stdev=2294.79 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.917 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.917 | 99.00th=[28705], 99.50th=[28967], 99.90th=[69731], 99.95th=[69731], 00:29:33.917 | 99.99th=[74974] 00:29:33.917 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2277.05, stdev=68.52, samples=19 00:29:33.917 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:29:33.917 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.917 cpu : usr=98.83%, sys=0.79%, ctx=14, majf=0, minf=23 00:29:33.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename1: (groupid=0, jobs=1): err= 0: pid=1897105: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10020msec) 00:29:33.917 slat (nsec): min=7572, max=52265, avg=23561.88, stdev=8148.81 00:29:33.917 clat (usec): min=19301, max=36097, avg=27726.83, stdev=683.25 00:29:33.917 lat (usec): min=19310, max=36124, avg=27750.39, stdev=682.44 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.917 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.917 | 99.00th=[28705], 99.50th=[28967], 99.90th=[35390], 99.95th=[35390], 00:29:33.917 | 99.99th=[35914] 00:29:33.917 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:29:33.917 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:29:33.917 lat (msec) : 20=0.03%, 50=99.97% 00:29:33.917 cpu : usr=98.42%, sys=1.19%, ctx=23, majf=0, minf=29 00:29:33.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename1: (groupid=0, jobs=1): err= 0: pid=1897106: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=574, BW=2296KiB/s (2351kB/s)(22.5MiB/10017msec) 00:29:33.917 slat (nsec): min=6895, max=80171, avg=27526.14, stdev=18206.71 00:29:33.917 clat (usec): min=14393, max=35359, avg=27609.49, stdev=1108.19 00:29:33.917 lat (usec): min=14401, max=35367, avg=27637.02, stdev=1108.83 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[22152], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:33.917 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.917 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.917 | 99.00th=[28705], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:29:33.917 | 99.99th=[35390] 00:29:33.917 bw ( KiB/s): min= 2176, max= 2352, per=4.17%, avg=2293.60, stdev=41.62, samples=20 00:29:33.917 iops : min= 544, max= 588, avg=573.40, stdev=10.40, samples=20 00:29:33.917 lat (msec) : 20=0.23%, 50=99.77% 00:29:33.917 cpu : usr=98.73%, sys=0.89%, ctx=11, majf=0, minf=28 00:29:33.917 IO depths : 1=5.9%, 2=11.9%, 4=24.5%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.917 filename1: (groupid=0, jobs=1): err= 0: pid=1897107: Mon Jul 15 13:04:03 2024 00:29:33.917 read: IOPS=587, BW=2348KiB/s (2404kB/s)(22.9MiB/10003msec) 00:29:33.917 slat (usec): min=6, max=909, avg=18.12, stdev=19.28 00:29:33.917 clat (usec): min=1469, max=35047, avg=27112.91, stdev=4009.93 00:29:33.917 lat (usec): min=1478, max=35055, avg=27131.04, stdev=4010.08 00:29:33.917 clat percentiles (usec): 00:29:33.917 | 1.00th=[ 2311], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:29:33.917 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:29:33.917 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:33.917 | 99.00th=[28967], 99.50th=[30540], 99.90th=[34866], 99.95th=[34866], 00:29:33.917 | 99.99th=[34866] 00:29:33.917 bw ( KiB/s): min= 2176, max= 3328, per=4.27%, avg=2351.16, stdev=238.36, samples=19 00:29:33.917 iops : min= 544, max= 832, avg=587.79, stdev=59.59, samples=19 00:29:33.917 lat (msec) : 2=0.43%, 4=1.75%, 10=0.27%, 20=0.27%, 50=97.28% 00:29:33.917 cpu : usr=98.62%, sys=1.00%, ctx=16, majf=0, minf=44 00:29:33.917 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:33.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.917 issued rwts: total=5872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename1: (groupid=0, jobs=1): err= 0: pid=1897108: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10017msec) 00:29:33.918 slat (nsec): min=7062, max=79062, avg=31509.23, stdev=19034.93 00:29:33.918 clat (usec): min=24840, max=51090, avg=27693.19, stdev=1274.33 00:29:33.918 lat (usec): min=24852, max=51124, avg=27724.70, stdev=1273.82 00:29:33.918 clat percentiles 
(usec): 00:29:33.918 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:33.918 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.918 | 99.00th=[28443], 99.50th=[28705], 99.90th=[51119], 99.95th=[51119], 00:29:33.918 | 99.99th=[51119] 00:29:33.918 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:29:33.918 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:29:33.918 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.918 cpu : usr=98.77%, sys=0.84%, ctx=13, majf=0, minf=34 00:29:33.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename1: (groupid=0, jobs=1): err= 0: pid=1897109: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10005msec) 00:29:33.918 slat (nsec): min=7240, max=78140, avg=26236.95, stdev=16487.64 00:29:33.918 clat (usec): min=20127, max=40721, avg=27715.21, stdev=851.32 00:29:33.918 lat (usec): min=20142, max=40743, avg=27741.45, stdev=850.06 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:33.918 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.918 | 99.00th=[28967], 99.50th=[28967], 99.90th=[40633], 99.95th=[40633], 00:29:33.918 | 99.99th=[40633] 00:29:33.918 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:29:33.918 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:29:33.918 lat (msec) : 50=100.00% 00:29:33.918 cpu : usr=98.76%, sys=0.84%, ctx=66, majf=0, minf=29 00:29:33.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename1: (groupid=0, jobs=1): err= 0: pid=1897110: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10017msec) 00:29:33.918 slat (nsec): min=6891, max=79213, avg=20508.04, stdev=17290.46 00:29:33.918 clat (usec): min=20071, max=54457, avg=27788.38, stdev=1544.55 00:29:33.918 lat (usec): min=20079, max=54473, avg=27808.88, stdev=1544.15 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[21103], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:33.918 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:33.918 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.918 | 99.00th=[34341], 99.50th=[34866], 99.90th=[46924], 99.95th=[46924], 00:29:33.918 | 99.99th=[54264] 00:29:33.918 bw ( KiB/s): min= 2176, max= 2336, per=4.16%, avg=2287.20, stdev=45.10, samples=20 00:29:33.918 iops : min= 544, max= 584, avg=571.80, stdev=11.27, samples=20 00:29:33.918 lat (msec) : 50=99.98%, 100=0.02% 00:29:33.918 cpu : 
usr=98.80%, sys=0.81%, ctx=5, majf=0, minf=24 00:29:33.918 IO depths : 1=2.4%, 2=5.1%, 4=11.8%, 8=67.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=91.6%, 8=5.7%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename1: (groupid=0, jobs=1): err= 0: pid=1897111: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10020msec) 00:29:33.918 slat (nsec): min=5040, max=68216, avg=25692.52, stdev=8397.59 00:29:33.918 clat (usec): min=17410, max=35460, avg=27702.37, stdev=839.15 00:29:33.918 lat (usec): min=17417, max=35474, avg=27728.06, stdev=838.69 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[26084], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.918 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.918 | 99.00th=[28705], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:29:33.918 | 99.99th=[35390] 00:29:33.918 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2291.20, stdev=39.40, samples=20 00:29:33.918 iops : min= 544, max= 576, avg=572.80, stdev= 9.85, samples=20 00:29:33.918 lat (msec) : 20=0.03%, 50=99.97% 00:29:33.918 cpu : usr=98.61%, sys=1.00%, ctx=21, majf=0, minf=26 00:29:33.918 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename1: (groupid=0, jobs=1): err= 0: pid=1897112: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=579, BW=2320KiB/s (2375kB/s)(22.7MiB/10020msec) 00:29:33.918 slat (nsec): min=6876, max=80686, avg=27370.76, stdev=18849.52 00:29:33.918 clat (usec): min=13509, max=38679, avg=27356.65, stdev=2042.46 00:29:33.918 lat (usec): min=13518, max=38693, avg=27384.02, stdev=2045.51 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[18482], 5.00th=[24511], 10.00th=[27132], 20.00th=[27395], 00:29:33.918 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.918 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:29:33.918 | 99.99th=[38536] 00:29:33.918 bw ( KiB/s): min= 2176, max= 2712, per=4.21%, avg=2318.00, stdev=104.66, samples=20 00:29:33.918 iops : min= 544, max= 678, avg=579.50, stdev=26.16, samples=20 00:29:33.918 lat (msec) : 20=2.00%, 50=98.00% 00:29:33.918 cpu : usr=98.38%, sys=1.22%, ctx=21, majf=0, minf=34 00:29:33.918 IO depths : 1=5.2%, 2=10.9%, 4=23.6%, 8=52.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename2: (groupid=0, jobs=1): err= 0: pid=1897113: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=570, BW=2282KiB/s 
(2337kB/s)(22.3MiB/10005msec) 00:29:33.918 slat (nsec): min=6743, max=55738, avg=26080.60, stdev=7762.96 00:29:33.918 clat (usec): min=20170, max=81120, avg=27806.17, stdev=2448.88 00:29:33.918 lat (usec): min=20185, max=81149, avg=27832.26, stdev=2448.18 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[25035], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.918 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.918 | 99.00th=[28967], 99.50th=[35914], 99.90th=[67634], 99.95th=[68682], 00:29:33.918 | 99.99th=[81265] 00:29:33.918 bw ( KiB/s): min= 2016, max= 2304, per=4.13%, avg=2275.37, stdev=74.59, samples=19 00:29:33.918 iops : min= 504, max= 576, avg=568.84, stdev=18.65, samples=19 00:29:33.918 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.918 cpu : usr=98.95%, sys=0.67%, ctx=15, majf=0, minf=28 00:29:33.918 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename2: (groupid=0, jobs=1): err= 0: pid=1897114: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=582, BW=2332KiB/s (2388kB/s)(22.8MiB/10011msec) 00:29:33.918 slat (nsec): min=7133, max=70381, avg=24334.24, stdev=11810.35 00:29:33.918 clat (usec): min=2098, max=39515, avg=27226.76, stdev=3412.08 00:29:33.918 lat (usec): min=2110, max=39549, avg=27251.09, stdev=3413.48 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[ 2704], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:33.918 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.918 | 99.00th=[28443], 99.50th=[28705], 99.90th=[39584], 99.95th=[39584], 00:29:33.918 | 99.99th=[39584] 00:29:33.918 bw ( KiB/s): min= 2176, max= 3040, per=4.23%, avg=2328.00, stdev=172.13, samples=20 00:29:33.918 iops : min= 544, max= 760, avg=582.00, stdev=43.03, samples=20 00:29:33.918 lat (msec) : 4=1.66%, 10=0.19%, 20=0.10%, 50=98.05% 00:29:33.918 cpu : usr=98.40%, sys=0.91%, ctx=131, majf=0, minf=49 00:29:33.918 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename2: (groupid=0, jobs=1): err= 0: pid=1897115: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10007msec) 00:29:33.918 slat (nsec): min=6441, max=57258, avg=26530.24, stdev=7727.06 00:29:33.918 clat (usec): min=21606, max=70627, avg=27786.41, stdev=2303.36 00:29:33.918 lat (usec): min=21626, max=70645, avg=27812.94, stdev=2302.53 00:29:33.918 clat percentiles (usec): 00:29:33.918 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.918 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.918 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.918 | 99.00th=[28705], 
99.50th=[28967], 99.90th=[70779], 99.95th=[70779], 00:29:33.918 | 99.99th=[70779] 00:29:33.918 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2277.05, stdev=68.52, samples=19 00:29:33.918 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:29:33.918 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.918 cpu : usr=98.95%, sys=0.67%, ctx=13, majf=0, minf=26 00:29:33.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.918 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.918 filename2: (groupid=0, jobs=1): err= 0: pid=1897116: Mon Jul 15 13:04:03 2024 00:29:33.918 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.3MiB/10005msec) 00:29:33.918 slat (nsec): min=7052, max=53914, avg=25677.85, stdev=8395.72 00:29:33.919 clat (usec): min=12429, max=65759, avg=27750.43, stdev=2287.19 00:29:33.919 lat (usec): min=12440, max=65780, avg=27776.11, stdev=2287.01 00:29:33.919 clat percentiles (usec): 00:29:33.919 | 1.00th=[24511], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:29:33.919 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.919 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.919 | 99.00th=[28967], 99.50th=[34866], 99.90th=[65799], 99.95th=[65799], 00:29:33.919 | 99.99th=[65799] 00:29:33.919 bw ( KiB/s): min= 2052, max= 2320, per=4.14%, avg=2277.26, stdev=67.99, samples=19 00:29:33.919 iops : min= 513, max= 580, avg=569.32, stdev=17.00, samples=19 00:29:33.919 lat (msec) : 20=0.38%, 50=99.34%, 100=0.28% 00:29:33.919 cpu : usr=98.90%, sys=0.69%, ctx=15, majf=0, minf=26 00:29:33.919 IO depths : 1=5.7%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:33.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.919 filename2: (groupid=0, jobs=1): err= 0: pid=1897117: Mon Jul 15 13:04:03 2024 00:29:33.919 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10005msec) 00:29:33.919 slat (nsec): min=7310, max=78281, avg=25670.18, stdev=16526.70 00:29:33.919 clat (usec): min=20118, max=40782, avg=27736.57, stdev=848.28 00:29:33.919 lat (usec): min=20133, max=40815, avg=27762.24, stdev=846.56 00:29:33.919 clat percentiles (usec): 00:29:33.919 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:33.919 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.919 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:33.919 | 99.00th=[28967], 99.50th=[28967], 99.90th=[40633], 99.95th=[40633], 00:29:33.919 | 99.99th=[40633] 00:29:33.919 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2284.80, stdev=46.89, samples=20 00:29:33.919 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:29:33.919 lat (msec) : 50=100.00% 00:29:33.919 cpu : usr=98.96%, sys=0.65%, ctx=16, majf=0, minf=22 00:29:33.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.919 filename2: (groupid=0, jobs=1): err= 0: pid=1897118: Mon Jul 15 13:04:03 2024 00:29:33.919 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10002msec) 00:29:33.919 slat (usec): min=6, max=137, avg=33.53, stdev=21.23 00:29:33.919 clat (usec): min=3637, max=66186, avg=27599.93, stdev=2488.60 00:29:33.919 lat (usec): min=3644, max=66236, avg=27633.46, stdev=2489.42 00:29:33.919 clat percentiles (usec): 00:29:33.919 | 1.00th=[26346], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:29:33.919 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:29:33.919 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.919 | 99.00th=[28705], 99.50th=[28967], 99.90th=[65799], 99.95th=[66323], 00:29:33.919 | 99.99th=[66323] 00:29:33.919 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2277.05, stdev=68.52, samples=19 00:29:33.919 iops : min= 512, max= 576, avg=569.26, stdev=17.13, samples=19 00:29:33.919 lat (msec) : 4=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:29:33.919 cpu : usr=98.73%, sys=0.87%, ctx=14, majf=0, minf=24 00:29:33.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:33.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.919 filename2: (groupid=0, jobs=1): err= 0: pid=1897119: Mon Jul 15 13:04:03 2024 00:29:33.919 read: IOPS=581, BW=2325KiB/s (2380kB/s)(22.7MiB/10004msec) 00:29:33.919 slat (nsec): min=6849, max=72461, avg=14992.92, stdev=11170.98 00:29:33.919 clat (usec): min=3647, max=91091, avg=27469.14, stdev=4584.45 00:29:33.919 lat (usec): min=3655, max=91147, avg=27484.13, stdev=4584.64 00:29:33.919 clat percentiles (usec): 00:29:33.919 | 1.00th=[18220], 5.00th=[21103], 10.00th=[23462], 20.00th=[24773], 00:29:33.919 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:33.919 | 70.00th=[27919], 80.00th=[27919], 90.00th=[31589], 95.00th=[33817], 00:29:33.919 | 99.00th=[35914], 99.50th=[36439], 99.90th=[83362], 99.95th=[90702], 00:29:33.919 | 99.99th=[90702] 00:29:33.919 bw ( KiB/s): min= 1936, max= 2416, per=4.20%, avg=2311.58, stdev=97.52, samples=19 00:29:33.919 iops : min= 484, max= 604, avg=577.89, stdev=24.38, samples=19 00:29:33.919 lat (msec) : 4=0.10%, 10=0.17%, 20=2.13%, 50=97.32%, 100=0.28% 00:29:33.919 cpu : usr=98.70%, sys=0.89%, ctx=14, majf=0, minf=37 00:29:33.919 IO depths : 1=0.1%, 2=0.1%, 4=2.4%, 8=81.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:29:33.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 issued rwts: total=5814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.919 filename2: (groupid=0, jobs=1): err= 0: pid=1897120: Mon Jul 15 13:04:03 2024 00:29:33.919 read: IOPS=570, BW=2282KiB/s (2336kB/s)(22.3MiB/10014msec) 00:29:33.919 slat (nsec): min=10955, max=92784, avg=32396.17, stdev=16329.09 00:29:33.919 clat (usec): min=21690, max=80544, avg=27787.25, stdev=2413.03 00:29:33.919 lat (usec): min=21707, max=80575, avg=27819.65, stdev=2411.53 
00:29:33.919 clat percentiles (usec): 00:29:33.919 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:33.919 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:33.919 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:29:33.919 | 99.00th=[28967], 99.50th=[33162], 99.90th=[70779], 99.95th=[70779], 00:29:33.919 | 99.99th=[80217] 00:29:33.919 bw ( KiB/s): min= 2032, max= 2320, per=4.14%, avg=2278.40, stdev=70.30, samples=20 00:29:33.919 iops : min= 508, max= 580, avg=569.60, stdev=17.58, samples=20 00:29:33.919 lat (msec) : 50=99.72%, 100=0.28% 00:29:33.919 cpu : usr=98.92%, sys=0.70%, ctx=10, majf=0, minf=33 00:29:33.919 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:33.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.919 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:33.919 00:29:33.919 Run status group 0 (all jobs): 00:29:33.919 READ: bw=53.7MiB/s (56.3MB/s), 2282KiB/s-2348KiB/s (2336kB/s-2404kB/s), io=538MiB (565MB), run=10002-10021msec 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.919 bdev_null0 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.919 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.920 [2024-07-15 13:04:03.633805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.920 bdev_null1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:33.920 13:04:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:33.920 { 00:29:33.920 "params": { 00:29:33.920 "name": "Nvme$subsystem", 00:29:33.920 "trtype": "$TEST_TRANSPORT", 00:29:33.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.920 "adrfam": "ipv4", 00:29:33.920 "trsvcid": "$NVMF_PORT", 00:29:33.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.920 "hdgst": ${hdgst:-false}, 00:29:33.920 "ddgst": ${ddgst:-false} 00:29:33.920 }, 00:29:33.920 "method": "bdev_nvme_attach_controller" 00:29:33.920 } 00:29:33.920 EOF 00:29:33.920 )") 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:33.920 { 00:29:33.920 "params": { 00:29:33.920 "name": "Nvme$subsystem", 00:29:33.920 "trtype": "$TEST_TRANSPORT", 00:29:33.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.920 "adrfam": "ipv4", 00:29:33.920 "trsvcid": "$NVMF_PORT", 00:29:33.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.920 "hdgst": ${hdgst:-false}, 00:29:33.920 "ddgst": ${ddgst:-false} 00:29:33.920 }, 00:29:33.920 "method": "bdev_nvme_attach_controller" 00:29:33.920 } 00:29:33.920 EOF 
00:29:33.920 )") 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:33.920 "params": { 00:29:33.920 "name": "Nvme0", 00:29:33.920 "trtype": "tcp", 00:29:33.920 "traddr": "10.0.0.2", 00:29:33.920 "adrfam": "ipv4", 00:29:33.920 "trsvcid": "4420", 00:29:33.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:33.920 "hdgst": false, 00:29:33.920 "ddgst": false 00:29:33.920 }, 00:29:33.920 "method": "bdev_nvme_attach_controller" 00:29:33.920 },{ 00:29:33.920 "params": { 00:29:33.920 "name": "Nvme1", 00:29:33.920 "trtype": "tcp", 00:29:33.920 "traddr": "10.0.0.2", 00:29:33.920 "adrfam": "ipv4", 00:29:33.920 "trsvcid": "4420", 00:29:33.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:33.920 "hdgst": false, 00:29:33.920 "ddgst": false 00:29:33.920 }, 00:29:33.920 "method": "bdev_nvme_attach_controller" 00:29:33.920 }' 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:33.920 13:04:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:33.920 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:33.920 ... 00:29:33.920 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:33.920 ... 
00:29:33.920 fio-3.35 00:29:33.920 Starting 4 threads 00:29:33.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.183 00:29:39.183 filename0: (groupid=0, jobs=1): err= 0: pid=1899068: Mon Jul 15 13:04:09 2024 00:29:39.183 read: IOPS=2589, BW=20.2MiB/s (21.2MB/s)(101MiB/5004msec) 00:29:39.183 slat (nsec): min=6184, max=40027, avg=9340.97, stdev=3172.17 00:29:39.183 clat (usec): min=839, max=6462, avg=3061.38, stdev=571.10 00:29:39.183 lat (usec): min=846, max=6472, avg=3070.72, stdev=570.86 00:29:39.183 clat percentiles (usec): 00:29:39.183 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2671], 00:29:39.183 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3032], 00:29:39.183 | 70.00th=[ 3097], 80.00th=[ 3326], 90.00th=[ 3916], 95.00th=[ 4359], 00:29:39.183 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 6128], 00:29:39.183 | 99.99th=[ 6456] 00:29:39.183 bw ( KiB/s): min=19936, max=21904, per=24.87%, avg=20743.11, stdev=561.95, samples=9 00:29:39.183 iops : min= 2492, max= 2738, avg=2592.89, stdev=70.24, samples=9 00:29:39.183 lat (usec) : 1000=0.02% 00:29:39.183 lat (msec) : 2=0.71%, 4=90.24%, 10=9.04% 00:29:39.184 cpu : usr=95.72%, sys=3.96%, ctx=7, majf=0, minf=0 00:29:39.184 IO depths : 1=0.2%, 2=3.8%, 4=68.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:39.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 issued rwts: total=12959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.184 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:39.184 filename0: (groupid=0, jobs=1): err= 0: pid=1899069: Mon Jul 15 13:04:09 2024 00:29:39.184 read: IOPS=2712, BW=21.2MiB/s (22.2MB/s)(106MiB/5003msec) 00:29:39.184 slat (nsec): min=6166, max=38345, avg=9260.91, stdev=3160.08 00:29:39.184 clat (usec): min=941, max=5368, avg=2921.70, stdev=562.43 00:29:39.184 lat (usec): min=962, max=5382, avg=2930.96, stdev=562.03 00:29:39.184 clat percentiles (usec): 00:29:39.184 | 1.00th=[ 1762], 5.00th=[ 2147], 10.00th=[ 2343], 20.00th=[ 2540], 00:29:39.184 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:29:39.184 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3654], 95.00th=[ 4178], 00:29:39.184 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5276], 00:29:39.184 | 99.99th=[ 5342] 00:29:39.184 bw ( KiB/s): min=20864, max=22944, per=25.96%, avg=21656.89, stdev=651.70, samples=9 00:29:39.184 iops : min= 2608, max= 2868, avg=2707.11, stdev=81.46, samples=9 00:29:39.184 lat (usec) : 1000=0.06% 00:29:39.184 lat (msec) : 2=2.44%, 4=90.47%, 10=7.03% 00:29:39.184 cpu : usr=96.38%, sys=3.26%, ctx=8, majf=0, minf=0 00:29:39.184 IO depths : 1=0.2%, 2=4.5%, 4=67.5%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:39.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 issued rwts: total=13570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.184 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:39.184 filename1: (groupid=0, jobs=1): err= 0: pid=1899070: Mon Jul 15 13:04:09 2024 00:29:39.184 read: IOPS=2536, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5004msec) 00:29:39.184 slat (nsec): min=6171, max=37744, avg=9321.11, stdev=3192.98 00:29:39.184 clat (usec): min=1364, max=6572, avg=3127.12, stdev=570.70 00:29:39.184 lat (usec): min=1375, max=6579, avg=3136.44, stdev=570.47 00:29:39.184 clat 
percentiles (usec): 00:29:39.184 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:29:39.184 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:29:39.184 | 70.00th=[ 3195], 80.00th=[ 3425], 90.00th=[ 3949], 95.00th=[ 4359], 00:29:39.184 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5800], 99.95th=[ 6063], 00:29:39.184 | 99.99th=[ 6587] 00:29:39.184 bw ( KiB/s): min=19152, max=21584, per=24.35%, avg=20312.70, stdev=724.73, samples=10 00:29:39.184 iops : min= 2394, max= 2698, avg=2539.00, stdev=90.65, samples=10 00:29:39.184 lat (msec) : 2=0.57%, 4=90.26%, 10=9.17% 00:29:39.184 cpu : usr=95.94%, sys=3.68%, ctx=14, majf=0, minf=9 00:29:39.184 IO depths : 1=0.4%, 2=2.3%, 4=68.9%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:39.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 issued rwts: total=12694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.184 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:39.184 filename1: (groupid=0, jobs=1): err= 0: pid=1899071: Mon Jul 15 13:04:09 2024 00:29:39.184 read: IOPS=2593, BW=20.3MiB/s (21.2MB/s)(101MiB/5007msec) 00:29:39.184 slat (nsec): min=6183, max=27581, avg=9130.24, stdev=2969.22 00:29:39.184 clat (usec): min=971, max=44198, avg=3056.53, stdev=1173.32 00:29:39.184 lat (usec): min=984, max=44219, avg=3065.66, stdev=1173.19 00:29:39.184 clat percentiles (usec): 00:29:39.184 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2638], 00:29:39.184 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3032], 00:29:39.184 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3884], 95.00th=[ 4293], 00:29:39.184 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 6063], 99.95th=[44303], 00:29:39.184 | 99.99th=[44303] 00:29:39.184 bw ( KiB/s): min=19648, max=21312, per=24.90%, avg=20771.20, stdev=473.97, samples=10 00:29:39.184 iops : min= 2456, max= 2664, avg=2596.40, stdev=59.25, samples=10 00:29:39.184 lat (usec) : 1000=0.01% 00:29:39.184 lat (msec) : 2=1.29%, 4=89.73%, 10=8.90%, 50=0.06% 00:29:39.184 cpu : usr=95.41%, sys=4.25%, ctx=10, majf=0, minf=0 00:29:39.184 IO depths : 1=0.4%, 2=3.2%, 4=68.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:39.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.184 issued rwts: total=12985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.184 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:39.184 00:29:39.184 Run status group 0 (all jobs): 00:29:39.184 READ: bw=81.5MiB/s (85.4MB/s), 19.8MiB/s-21.2MiB/s (20.8MB/s-22.2MB/s), io=408MiB (428MB), run=5003-5007msec 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.184 00:29:39.184 real 0m24.599s 00:29:39.184 user 4m51.879s 00:29:39.184 sys 0m4.375s 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.184 13:04:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:39.184 ************************************ 00:29:39.184 END TEST fio_dif_rand_params 00:29:39.184 ************************************ 00:29:39.184 13:04:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:39.184 13:04:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:39.184 13:04:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:39.184 13:04:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.184 13:04:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:39.443 ************************************ 00:29:39.443 START TEST fio_dif_digest 00:29:39.443 ************************************ 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:39.443 bdev_null0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:39.443 [2024-07-15 13:04:10.197212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.443 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:29:39.444 { 00:29:39.444 "params": { 00:29:39.444 "name": "Nvme$subsystem", 00:29:39.444 "trtype": "$TEST_TRANSPORT", 00:29:39.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.444 "adrfam": "ipv4", 00:29:39.444 "trsvcid": "$NVMF_PORT", 00:29:39.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.444 "hdgst": ${hdgst:-false}, 00:29:39.444 "ddgst": ${ddgst:-false} 00:29:39.444 }, 00:29:39.444 "method": "bdev_nvme_attach_controller" 00:29:39.444 } 00:29:39.444 EOF 00:29:39.444 )") 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
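(The target side feeding this digest run reduces to four RPCs, all visible in the trace above. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py as it does elsewhere in autotest; every argument is copied from the traced commands.)

# Target setup sketch: 64 MiB null bdev with 512 B blocks + 16 B metadata,
# DIF type 3, exported over NVMe/TCP on 10.0.0.2:4420.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420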
00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:39.444 "params": { 00:29:39.444 "name": "Nvme0", 00:29:39.444 "trtype": "tcp", 00:29:39.444 "traddr": "10.0.0.2", 00:29:39.444 "adrfam": "ipv4", 00:29:39.444 "trsvcid": "4420", 00:29:39.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.444 "hdgst": true, 00:29:39.444 "ddgst": true 00:29:39.444 }, 00:29:39.444 "method": "bdev_nvme_attach_controller" 00:29:39.444 }' 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:39.444 13:04:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.702 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:39.702 ... 
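(Note: the only host-side change from the earlier random-params runs is "hdgst": true / "ddgst": true in the attach params just printed; NVMe/TCP header and data digests (CRC32C) are requested per connection by bdev_nvme at attach time, so the fio job itself only carries the I/O shape. A sketch of that job written out as a plain fio file, assuming the Nvme0n1 bdev name:)

cat > /tmp/dif_digest.fio <<'EOF'
[global]
; mirrors target/dif.sh@127 above: 128k randread, iodepth=3, 3 jobs, 10 s
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
numjobs=3
EOF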
00:29:39.702 fio-3.35 00:29:39.702 Starting 3 threads 00:29:39.702 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.908 00:29:51.908 filename0: (groupid=0, jobs=1): err= 0: pid=1900175: Mon Jul 15 13:04:21 2024 00:29:51.908 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(354MiB/10045msec) 00:29:51.908 slat (nsec): min=6528, max=25407, avg=11525.09, stdev=2186.87 00:29:51.908 clat (usec): min=8141, max=53832, avg=10608.08, stdev=1309.59 00:29:51.908 lat (usec): min=8154, max=53843, avg=10619.60, stdev=1309.57 00:29:51.908 clat percentiles (usec): 00:29:51.908 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:29:51.908 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:29:51.908 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:51.908 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13304], 99.95th=[45876], 00:29:51.908 | 99.99th=[53740] 00:29:51.908 bw ( KiB/s): min=35072, max=36864, per=34.46%, avg=36236.80, stdev=578.28, samples=20 00:29:51.908 iops : min= 274, max= 288, avg=283.10, stdev= 4.52, samples=20 00:29:51.908 lat (msec) : 10=21.85%, 20=78.08%, 50=0.04%, 100=0.04% 00:29:51.908 cpu : usr=94.58%, sys=5.11%, ctx=27, majf=0, minf=120 00:29:51.908 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.908 issued rwts: total=2833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.908 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:51.908 filename0: (groupid=0, jobs=1): err= 0: pid=1900176: Mon Jul 15 13:04:21 2024 00:29:51.908 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(338MiB/10045msec) 00:29:51.908 slat (nsec): min=6468, max=35940, avg=11583.10, stdev=2065.10 00:29:51.908 clat (usec): min=8268, max=46589, avg=11131.40, stdev=1263.53 00:29:51.908 lat (usec): min=8280, max=46602, avg=11142.98, stdev=1263.51 00:29:51.908 clat percentiles (usec): 00:29:51.908 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:29:51.908 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:29:51.908 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:29:51.908 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13960], 99.95th=[45351], 00:29:51.908 | 99.99th=[46400] 00:29:51.908 bw ( KiB/s): min=33536, max=35328, per=32.84%, avg=34534.40, stdev=511.33, samples=20 00:29:51.908 iops : min= 262, max= 276, avg=269.80, stdev= 3.99, samples=20 00:29:51.908 lat (msec) : 10=8.11%, 20=91.81%, 50=0.07% 00:29:51.908 cpu : usr=94.29%, sys=5.41%, ctx=18, majf=0, minf=192 00:29:51.908 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.908 issued rwts: total=2700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.908 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:51.908 filename0: (groupid=0, jobs=1): err= 0: pid=1900177: Mon Jul 15 13:04:21 2024 00:29:51.908 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(340MiB/10045msec) 00:29:51.908 slat (nsec): min=6465, max=22271, avg=11477.29, stdev=2137.76 00:29:51.908 clat (usec): min=7408, max=47594, avg=11049.67, stdev=1274.81 00:29:51.908 lat (usec): min=7421, max=47603, avg=11061.14, stdev=1274.77 00:29:51.908 clat percentiles (usec): 00:29:51.908 | 
1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:29:51.908 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:29:51.908 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:29:51.909 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14222], 99.95th=[45876], 00:29:51.909 | 99.99th=[47449] 00:29:51.909 bw ( KiB/s): min=33792, max=36352, per=33.08%, avg=34790.40, stdev=556.55, samples=20 00:29:51.909 iops : min= 264, max= 284, avg=271.80, stdev= 4.35, samples=20 00:29:51.909 lat (msec) : 10=10.88%, 20=89.04%, 50=0.07% 00:29:51.909 cpu : usr=94.18%, sys=5.52%, ctx=23, majf=0, minf=73 00:29:51.909 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:51.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.909 issued rwts: total=2720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.909 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:51.909 00:29:51.909 Run status group 0 (all jobs): 00:29:51.909 READ: bw=103MiB/s (108MB/s), 33.6MiB/s-35.3MiB/s (35.2MB/s-37.0MB/s), io=1032MiB (1082MB), run=10045-10045msec 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.909 00:29:51.909 real 0m11.109s 00:29:51.909 user 0m35.281s 00:29:51.909 sys 0m1.918s 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:51.909 13:04:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.909 ************************************ 00:29:51.909 END TEST fio_dif_digest 00:29:51.909 ************************************ 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:51.909 13:04:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:51.909 13:04:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:29:51.909 rmmod nvme_tcp 00:29:51.909 rmmod nvme_fabrics 00:29:51.909 rmmod nvme_keyring 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1891532 ']' 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1891532 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1891532 ']' 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1891532 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891532 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891532' 00:29:51.909 killing process with pid 1891532 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1891532 00:29:51.909 13:04:21 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1891532 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:51.909 13:04:21 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:53.347 Waiting for block devices as requested 00:29:53.347 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:53.606 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:53.606 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:53.606 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:53.864 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:53.864 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:53.864 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:54.122 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:54.122 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:54.122 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:54.380 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:54.380 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:54.380 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:54.380 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:54.638 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:54.638 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:54.638 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:54.897 13:04:25 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.897 13:04:25 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.897 13:04:25 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.897 13:04:25 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.897 13:04:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.897 13:04:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:54.897 13:04:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.800 13:04:27 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.800 00:29:56.800 real 1m14.364s 00:29:56.800 user 7m10.584s 00:29:56.801 sys 0m18.911s 00:29:56.801 13:04:27 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:29:56.801 13:04:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:56.801 ************************************ 00:29:56.801 END TEST nvmf_dif 00:29:56.801 ************************************ 00:29:56.801 13:04:27 -- common/autotest_common.sh@1142 -- # return 0 00:29:56.801 13:04:27 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:56.801 13:04:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:56.801 13:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.801 13:04:27 -- common/autotest_common.sh@10 -- # set +x 00:29:57.060 ************************************ 00:29:57.060 START TEST nvmf_abort_qd_sizes 00:29:57.060 ************************************ 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:57.060 * Looking for test storage... 00:29:57.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.060 13:04:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.060 13:04:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.647 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.648 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.648 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.648 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.648 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
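The nvmf_tcp_init trace that follows takes the two E810 ports discovered above (cvl_0_0, cvl_0_1) and wires them into a single-host loopback topology: the target port is moved into a private network namespace so initiator and target can talk over real TCP on one machine. Condensed from the xtrace below (a sketch of what the harness effectively runs, not a standalone script):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # both directions must answer before tests run
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target process started later is prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is why nvmf_tgt listens on 10.0.0.2 while the initiator-side tools connect from 10.0.0.1.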
00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:03.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:30:03.648 00:30:03.648 --- 10.0.0.2 ping statistics --- 00:30:03.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.648 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:30:03.648 00:30:03.648 --- 10.0.0.1 ping statistics --- 00:30:03.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.648 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:30:03.648 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.649 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:03.649 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:03.649 13:04:33 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:05.554 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:05.554 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:06.489 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:06.489 13:04:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1908129 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1908129 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1908129 ']' 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:06.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.490 13:04:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:06.748 [2024-07-15 13:04:37.459477] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:06.748 [2024-07-15 13:04:37.459535] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.748 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.748 [2024-07-15 13:04:37.532856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.748 [2024-07-15 13:04:37.614415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.748 [2024-07-15 13:04:37.614452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.748 [2024-07-15 13:04:37.614459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.748 [2024-07-15 13:04:37.614465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.748 [2024-07-15 13:04:37.614470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.748 [2024-07-15 13:04:37.614513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.748 [2024-07-15 13:04:37.614619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.748 [2024-07-15 13:04:37.614728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.748 [2024-07-15 13:04:37.614729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:30:07.677 13:04:38 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.677 13:04:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:07.677 ************************************ 00:30:07.677 START TEST spdk_target_abort 00:30:07.677 ************************************ 00:30:07.677 13:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:07.677 13:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:07.677 13:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:07.677 13:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.677 13:04:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.948 spdk_targetn1 00:30:10.948 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.948 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.948 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.948 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.948 [2024-07-15 13:04:41.196366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.948 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.949 [2024-07-15 13:04:41.225334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:10.949 13:04:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:10.949 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:14.222 Initializing NVMe Controllers 00:30:14.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:14.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:14.222 Initialization complete. Launching workers. 00:30:14.222 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15239, failed: 0 00:30:14.222 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1364, failed to submit 13875 00:30:14.222 success 727, unsuccess 637, failed 0 00:30:14.222 13:04:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:14.222 13:04:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:14.222 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.534 Initializing NVMe Controllers 00:30:17.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:17.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:17.534 Initialization complete. Launching workers. 00:30:17.534 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8756, failed: 0 00:30:17.534 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1258, failed to submit 7498 00:30:17.534 success 331, unsuccess 927, failed 0 00:30:17.534 13:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:17.534 13:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:17.534 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.057 Initializing NVMe Controllers 00:30:20.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:20.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:20.057 Initialization complete. Launching workers. 
00:30:20.057 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37686, failed: 0 00:30:20.057 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2893, failed to submit 34793 00:30:20.057 success 618, unsuccess 2275, failed 0 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.057 13:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1908129 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1908129 ']' 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1908129 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1908129 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1908129' 00:30:21.425 killing process with pid 1908129 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1908129 00:30:21.425 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1908129 00:30:21.684 00:30:21.684 real 0m14.168s 00:30:21.684 user 0m56.473s 00:30:21.684 sys 0m2.308s 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.684 ************************************ 00:30:21.684 END TEST spdk_target_abort 00:30:21.684 ************************************ 00:30:21.684 13:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:21.684 13:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:21.684 13:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:21.684 13:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.684 13:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:21.684 
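Two things are worth decoding at this point. First, the abort accounting in the spdk_target_abort runs above is self-consistent: in every run, abort submitted plus failed to submit equals I/O completed (qd=4: 1364 + 13875 = 15239), and success plus unsuccess equals abort submitted (727 + 637 = 1364); the abort example attempts to abort each I/O it issues, and "unsuccess" counts abort commands that completed without catching their I/O still in flight. Second, kernel_target_abort below exercises the Linux nvmet target instead of an SPDK target process, configuring it through configfs. The xtrace records the mkdir/echo/ln -s steps but not the files the echoes are redirected into; in this condensed sketch the attribute names are the standard nvmet configfs ones, inferred rather than copied from the log:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir $sub $sub/namespaces/1 $port
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $sub/attr_model        # identity string; exact attribute not visible in the trace
  echo 1 > $sub/attr_allow_any_host
  echo /dev/nvme0n1 > $sub/namespaces/1/device_path              # the NVMe drive released by setup.sh reset
  echo 1 > $sub/namespaces/1/enable
  echo 10.0.0.1 > $port/addr_traddr
  echo tcp > $port/addr_trtype
  echo 4420 > $port/addr_trsvcid
  echo ipv4 > $port/addr_adrfam
  ln -s $sub $port/subsystems/                                   # exposes the subsystem on the port

The nvme discover output printed below (two records: the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) confirms the kernel target is listening on 10.0.0.1:4420 before the abort runs start.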
************************************ 00:30:21.684 START TEST kernel_target_abort 00:30:21.684 ************************************ 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:21.684 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:21.685 13:04:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:24.976 Waiting for block devices as requested 00:30:24.976 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:24.976 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:24.976 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:24.976 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:24.976 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:24.976 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:24.976 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:24.976 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:25.235 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:25.235 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:25.235 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:25.235 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:25.495 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:25.495 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:25.495 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:25.754 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:25.754 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:25.754 No valid GPT data, bailing 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:25.754 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:26.014 13:04:56 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:26.014 00:30:26.014 Discovery Log Number of Records 2, Generation counter 2 00:30:26.014 =====Discovery Log Entry 0====== 00:30:26.014 trtype: tcp 00:30:26.014 adrfam: ipv4 00:30:26.014 subtype: current discovery subsystem 00:30:26.014 treq: not specified, sq flow control disable supported 00:30:26.014 portid: 1 00:30:26.014 trsvcid: 4420 00:30:26.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:26.014 traddr: 10.0.0.1 00:30:26.014 eflags: none 00:30:26.014 sectype: none 00:30:26.014 =====Discovery Log Entry 1====== 00:30:26.014 trtype: tcp 00:30:26.014 adrfam: ipv4 00:30:26.014 subtype: nvme subsystem 00:30:26.014 treq: not specified, sq flow control disable supported 00:30:26.014 portid: 1 00:30:26.014 trsvcid: 4420 00:30:26.014 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:26.014 traddr: 10.0.0.1 00:30:26.014 eflags: none 00:30:26.014 sectype: none 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.014 13:04:56 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:26.014 13:04:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.014 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.299 Initializing NVMe Controllers 00:30:29.299 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:29.299 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:29.299 Initialization complete. Launching workers. 00:30:29.300 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84787, failed: 0 00:30:29.300 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 84787, failed to submit 0 00:30:29.300 success 0, unsuccess 84787, failed 0 00:30:29.300 13:04:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:29.300 13:04:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:29.300 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.586 Initializing NVMe Controllers 00:30:32.586 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:32.586 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:32.586 Initialization complete. Launching workers. 
00:30:32.586 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 135786, failed: 0 00:30:32.586 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33710, failed to submit 102076 00:30:32.586 success 0, unsuccess 33710, failed 0 00:30:32.586 13:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:32.586 13:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:32.586 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.874 Initializing NVMe Controllers 00:30:35.874 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.874 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.874 Initialization complete. Launching workers. 00:30:35.874 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131755, failed: 0 00:30:35.874 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32990, failed to submit 98765 00:30:35.874 success 0, unsuccess 32990, failed 0 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:35.874 13:05:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:38.412 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:30:38.412 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:38.412 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:38.980 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:39.272 00:30:39.272 real 0m17.360s 00:30:39.272 user 0m8.499s 00:30:39.272 sys 0m5.109s 00:30:39.272 13:05:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:39.272 13:05:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:39.272 ************************************ 00:30:39.272 END TEST kernel_target_abort 00:30:39.272 ************************************ 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:39.272 13:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:39.272 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:39.272 rmmod nvme_tcp 00:30:39.272 rmmod nvme_fabrics 00:30:39.272 rmmod nvme_keyring 00:30:39.272 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:39.272 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:39.272 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:39.272 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1908129 ']' 00:30:39.272 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1908129 00:30:39.273 13:05:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1908129 ']' 00:30:39.273 13:05:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1908129 00:30:39.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1908129) - No such process 00:30:39.273 13:05:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1908129 is not found' 00:30:39.273 Process with pid 1908129 is not found 00:30:39.273 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:39.273 13:05:10 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:41.805 Waiting for block devices as requested 00:30:41.805 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:42.064 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:42.064 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:42.323 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:42.323 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:42.323 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:42.323 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:42.583 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:42.583 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:42.583 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:42.842 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:42.842 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:42.842 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:42.842 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:30:43.100 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:43.101 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:43.101 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:43.359 13:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.261 13:05:16 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:45.261 00:30:45.261 real 0m48.406s 00:30:45.261 user 1m9.144s 00:30:45.261 sys 0m16.027s 00:30:45.261 13:05:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:45.261 13:05:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:45.261 ************************************ 00:30:45.261 END TEST nvmf_abort_qd_sizes 00:30:45.261 ************************************ 00:30:45.261 13:05:16 -- common/autotest_common.sh@1142 -- # return 0 00:30:45.261 13:05:16 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:45.261 13:05:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:45.261 13:05:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:45.261 13:05:16 -- common/autotest_common.sh@10 -- # set +x 00:30:45.520 ************************************ 00:30:45.520 START TEST keyring_file 00:30:45.520 ************************************ 00:30:45.520 13:05:16 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:45.520 * Looking for test storage... 
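The keyring_file suite that begins here exercises file-based TLS pre-shared keys for NVMe/TCP. A few records below, its prep_key helper turns each hex key into a mode-0600 temp file in the NVMeTLSkey-1 interchange framing via an inline python snippet. The following is a sketch of that step only; the CRC32-plus-base64 payload encoding is an assumption based on the TP 8011 interchange format, not copied from nvmf/common.sh.

    # Sketch of prep_key: wrap a hex PSK in interchange framing ("00" = no
    # HMAC digest, matching digest=0 in the log) and store it privately.
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)    # e.g. /tmp/tmp.XXXXXXXXXX
    python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); p=raw+zlib.crc32(raw).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(p).decode()+":",end="")' "$key" > "$path"
    chmod 0600 "$path"    # the permission the later permission tests rely on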
00:30:45.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:45.520 13:05:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:45.520 13:05:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.520 13:05:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.520 13:05:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.520 13:05:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.520 13:05:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.520 13:05:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.520 13:05:16 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.520 13:05:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:45.520 13:05:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.520 13:05:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QRHveOV35k 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:45.521 13:05:16 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QRHveOV35k 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QRHveOV35k 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QRHveOV35k 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3UrqHKgd0z 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:45.521 13:05:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3UrqHKgd0z 00:30:45.521 13:05:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3UrqHKgd0z 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3UrqHKgd0z 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=1916904 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1916904 00:30:45.521 13:05:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:45.521 13:05:16 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1916904 ']' 00:30:45.521 13:05:16 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.521 13:05:16 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:45.521 13:05:16 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.521 13:05:16 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:45.521 13:05:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:45.780 [2024-07-15 13:05:16.496793] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
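At this point file.sh has launched spdk_tgt in the background (its EAL initialization banner begins in the next record) and blocks in waitforlisten until the RPC socket answers. A minimal version of that launch-and-poll pattern, assuming the workspace paths shown in the log:

    # Start the target and poll its UNIX-domain RPC socket until it is ready.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" &
    tgtpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$tgtpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done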
00:30:45.780 [2024-07-15 13:05:16.496843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916904 ] 00:30:45.780 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.780 [2024-07-15 13:05:16.556561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.780 [2024-07-15 13:05:16.636452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.347 13:05:17 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.347 13:05:17 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:46.347 13:05:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:46.347 13:05:17 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.347 13:05:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:46.347 [2024-07-15 13:05:17.297626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.606 null0 00:30:46.606 [2024-07-15 13:05:17.329673] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:46.606 [2024-07-15 13:05:17.329909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:46.606 [2024-07-15 13:05:17.337689] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.606 13:05:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:46.606 [2024-07-15 13:05:17.349721] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:46.606 request: 00:30:46.606 { 00:30:46.606 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.606 "secure_channel": false, 00:30:46.606 "listen_address": { 00:30:46.606 "trtype": "tcp", 00:30:46.606 "traddr": "127.0.0.1", 00:30:46.606 "trsvcid": "4420" 00:30:46.606 }, 00:30:46.606 "method": "nvmf_subsystem_add_listener", 00:30:46.606 "req_id": 1 00:30:46.606 } 00:30:46.606 Got JSON-RPC error response 00:30:46.606 response: 00:30:46.606 { 00:30:46.606 "code": -32602, 00:30:46.606 "message": "Invalid parameters" 00:30:46.606 } 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 
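The records around this point show the suite's first negative test: the target already listens on 127.0.0.1:4420 with a secure channel, a second nvmf_subsystem_add_listener for the same address is rejected with -32602 "Listener already exists", and the NOT wrapper from autotest_common.sh captures es=1 so that the failure counts as a pass. A hedged reconstruction of that sequence as plain rpc.py calls (the test itself drives it through rpc_cmd, and the --secure-channel flag spelling is an assumption); $SPDK is the workspace path from the previous sketch:

    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420 --secure-channel
    # A second add of the same address must be rejected for the test to pass.
    if rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420; then
        echo 'expected "Listener already exists"' >&2
        exit 1
    fi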
00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:46.606 13:05:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=1917033 00:30:46.606 13:05:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:46.606 13:05:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1917033 /var/tmp/bperf.sock 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1917033 ']' 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:46.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.606 13:05:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:46.606 [2024-07-15 13:05:17.396466] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:46.606 [2024-07-15 13:05:17.396507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917033 ] 00:30:46.606 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.606 [2024-07-15 13:05:17.463968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.606 [2024-07-15 13:05:17.544007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.542 13:05:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.542 13:05:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:47.543 13:05:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:47.543 13:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:47.543 13:05:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3UrqHKgd0z 00:30:47.543 13:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3UrqHKgd0z 00:30:47.801 13:05:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:47.801 13:05:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:47.801 13:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:47.801 13:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:47.801 13:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:47.801 13:05:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.QRHveOV35k == \/\t\m\p\/\t\m\p\.\Q\R\H\v\e\O\V\3\5\k ]] 00:30:47.801 13:05:18 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:47.801 13:05:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:47.801 13:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:47.801 13:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:47.801 13:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.059 13:05:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3UrqHKgd0z == \/\t\m\p\/\t\m\p\.\3\U\r\q\H\K\g\d\0\z ]] 00:30:48.059 13:05:18 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:48.059 13:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.059 13:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.059 13:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.059 13:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.059 13:05:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.317 13:05:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:48.317 13:05:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:48.317 13:05:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.317 13:05:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.317 13:05:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.317 13:05:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.317 13:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.575 13:05:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:48.575 13:05:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.575 13:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.575 [2024-07-15 13:05:19.430329] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:48.575 nvme0n1 00:30:48.575 13:05:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:48.575 13:05:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.575 13:05:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.575 13:05:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.575 13:05:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.575 13:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:48.833 13:05:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:48.833 13:05:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:48.833 13:05:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:48.833 13:05:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.833 13:05:19 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.833 13:05:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:48.833 13:05:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.090 13:05:19 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:49.090 13:05:19 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:49.090 Running I/O for 1 seconds... 00:30:50.464 00:30:50.464 Latency(us) 00:30:50.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.464 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:50.464 nvme0n1 : 1.00 15381.23 60.08 0.00 0.00 8298.32 4673.00 16982.37 00:30:50.464 =================================================================================================================== 00:30:50.464 Total : 15381.23 60.08 0.00 0.00 8298.32 4673.00 16982.37 00:30:50.464 0 00:30:50.464 13:05:20 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:50.464 13:05:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:50.464 13:05:21 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.464 13:05:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:50.464 13:05:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:50.464 13:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.722 13:05:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:50.722 13:05:21 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:50.722 13:05:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:50.722 13:05:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:50.722 13:05:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:50.722 13:05:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.722 13:05:21 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:50.722 13:05:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.722 13:05:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:50.722 13:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:50.981 [2024-07-15 13:05:21.744051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:50.981 [2024-07-15 13:05:21.744585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258b770 (107): Transport endpoint is not connected 00:30:50.981 [2024-07-15 13:05:21.745580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258b770 (9): Bad file descriptor 00:30:50.981 [2024-07-15 13:05:21.746580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.981 [2024-07-15 13:05:21.746592] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:50.981 [2024-07-15 13:05:21.746599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.981 request: 00:30:50.981 { 00:30:50.981 "name": "nvme0", 00:30:50.981 "trtype": "tcp", 00:30:50.981 "traddr": "127.0.0.1", 00:30:50.981 "adrfam": "ipv4", 00:30:50.981 "trsvcid": "4420", 00:30:50.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:50.981 "prchk_reftag": false, 00:30:50.981 "prchk_guard": false, 00:30:50.981 "hdgst": false, 00:30:50.981 "ddgst": false, 00:30:50.981 "psk": "key1", 00:30:50.981 "method": "bdev_nvme_attach_controller", 00:30:50.981 "req_id": 1 00:30:50.981 } 00:30:50.981 Got JSON-RPC error response 00:30:50.981 response: 00:30:50.981 { 00:30:50.981 "code": -5, 00:30:50.981 "message": "Input/output error" 00:30:50.981 } 00:30:50.981 13:05:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:50.981 13:05:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:50.981 13:05:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:50.981 13:05:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:50.981 13:05:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:50.981 13:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:50.981 13:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:50.981 13:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:50.981 13:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:50.981 13:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:51.240 13:05:21 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:51.240 13:05:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:51.240 13:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:51.240 13:05:21 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:51.240 13:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.240 13:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:51.240 13:05:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.240 13:05:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:51.240 13:05:22 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:51.240 13:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:51.499 13:05:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:51.499 13:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:51.758 13:05:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:51.758 13:05:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:51.758 13:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.758 13:05:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:51.758 13:05:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.QRHveOV35k 00:30:51.758 13:05:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.758 13:05:22 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:51.758 13:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:52.017 [2024-07-15 13:05:22.836749] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QRHveOV35k': 0100660 00:30:52.017 [2024-07-15 13:05:22.836774] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:52.017 request: 00:30:52.017 { 00:30:52.017 "name": "key0", 00:30:52.017 "path": "/tmp/tmp.QRHveOV35k", 00:30:52.017 "method": "keyring_file_add_key", 00:30:52.017 "req_id": 1 00:30:52.017 } 00:30:52.017 Got JSON-RPC error response 00:30:52.017 response: 00:30:52.017 { 00:30:52.017 "code": -1, 00:30:52.017 "message": "Operation not permitted" 00:30:52.017 } 00:30:52.017 13:05:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:52.017 13:05:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:52.017 13:05:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:52.017 13:05:22 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:52.017 13:05:22 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.QRHveOV35k 00:30:52.017 13:05:22 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:52.017 13:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QRHveOV35k 00:30:52.276 13:05:23 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.QRHveOV35k 00:30:52.276 13:05:23 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:52.276 13:05:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:52.276 13:05:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:52.276 13:05:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.276 13:05:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:52.276 13:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.276 13:05:23 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:52.276 13:05:23 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:52.276 13:05:23 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.276 13:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.536 [2024-07-15 13:05:23.378179] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QRHveOV35k': No such file or directory 00:30:52.536 [2024-07-15 13:05:23.378197] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:52.536 [2024-07-15 13:05:23.378218] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:52.536 [2024-07-15 13:05:23.378227] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:52.536 [2024-07-15 13:05:23.378233] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:52.536 request: 00:30:52.536 { 00:30:52.536 "name": "nvme0", 00:30:52.536 "trtype": "tcp", 00:30:52.536 "traddr": "127.0.0.1", 00:30:52.536 "adrfam": "ipv4", 00:30:52.536 
"trsvcid": "4420", 00:30:52.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.536 "prchk_reftag": false, 00:30:52.536 "prchk_guard": false, 00:30:52.536 "hdgst": false, 00:30:52.536 "ddgst": false, 00:30:52.536 "psk": "key0", 00:30:52.536 "method": "bdev_nvme_attach_controller", 00:30:52.536 "req_id": 1 00:30:52.536 } 00:30:52.536 Got JSON-RPC error response 00:30:52.536 response: 00:30:52.536 { 00:30:52.536 "code": -19, 00:30:52.536 "message": "No such device" 00:30:52.536 } 00:30:52.536 13:05:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:52.536 13:05:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:52.536 13:05:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:52.536 13:05:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:52.536 13:05:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:52.536 13:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:52.795 13:05:23 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LVesLLjjyP 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:52.795 13:05:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:52.795 13:05:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:52.795 13:05:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:52.795 13:05:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:52.795 13:05:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:52.795 13:05:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LVesLLjjyP 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LVesLLjjyP 00:30:52.795 13:05:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.LVesLLjjyP 00:30:52.795 13:05:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LVesLLjjyP 00:30:52.795 13:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LVesLLjjyP 00:30:53.054 13:05:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:53.054 13:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:53.313 nvme0n1 00:30:53.313 
13:05:24 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:53.313 13:05:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.313 13:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.313 13:05:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.313 13:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.313 13:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.313 13:05:24 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:53.313 13:05:24 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:53.313 13:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:53.572 13:05:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:53.572 13:05:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:53.572 13:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.572 13:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.572 13:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.831 13:05:24 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:53.831 13:05:24 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:53.831 13:05:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.831 13:05:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.831 13:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.831 13:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.831 13:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.831 13:05:24 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:53.831 13:05:24 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:53.831 13:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:54.090 13:05:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:54.090 13:05:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:54.090 13:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.348 13:05:25 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:54.348 13:05:25 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LVesLLjjyP 00:30:54.348 13:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LVesLLjjyP 00:30:54.348 13:05:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3UrqHKgd0z 00:30:54.348 13:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3UrqHKgd0z 00:30:54.606 13:05:25 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.606 13:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:54.864 nvme0n1 00:30:54.864 13:05:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:54.864 13:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:55.187 13:05:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:55.187 "subsystems": [ 00:30:55.187 { 00:30:55.187 "subsystem": "keyring", 00:30:55.187 "config": [ 00:30:55.187 { 00:30:55.187 "method": "keyring_file_add_key", 00:30:55.187 "params": { 00:30:55.187 "name": "key0", 00:30:55.187 "path": "/tmp/tmp.LVesLLjjyP" 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "keyring_file_add_key", 00:30:55.187 "params": { 00:30:55.187 "name": "key1", 00:30:55.187 "path": "/tmp/tmp.3UrqHKgd0z" 00:30:55.187 } 00:30:55.187 } 00:30:55.187 ] 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "subsystem": "iobuf", 00:30:55.187 "config": [ 00:30:55.187 { 00:30:55.187 "method": "iobuf_set_options", 00:30:55.187 "params": { 00:30:55.187 "small_pool_count": 8192, 00:30:55.187 "large_pool_count": 1024, 00:30:55.187 "small_bufsize": 8192, 00:30:55.187 "large_bufsize": 135168 00:30:55.187 } 00:30:55.187 } 00:30:55.187 ] 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "subsystem": "sock", 00:30:55.187 "config": [ 00:30:55.187 { 00:30:55.187 "method": "sock_set_default_impl", 00:30:55.187 "params": { 00:30:55.187 "impl_name": "posix" 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "sock_impl_set_options", 00:30:55.187 "params": { 00:30:55.187 "impl_name": "ssl", 00:30:55.187 "recv_buf_size": 4096, 00:30:55.187 "send_buf_size": 4096, 00:30:55.187 "enable_recv_pipe": true, 00:30:55.187 "enable_quickack": false, 00:30:55.187 "enable_placement_id": 0, 00:30:55.187 "enable_zerocopy_send_server": true, 00:30:55.187 "enable_zerocopy_send_client": false, 00:30:55.187 "zerocopy_threshold": 0, 00:30:55.187 "tls_version": 0, 00:30:55.187 "enable_ktls": false 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "sock_impl_set_options", 00:30:55.187 "params": { 00:30:55.187 "impl_name": "posix", 00:30:55.187 "recv_buf_size": 2097152, 00:30:55.187 "send_buf_size": 2097152, 00:30:55.187 "enable_recv_pipe": true, 00:30:55.187 "enable_quickack": false, 00:30:55.187 "enable_placement_id": 0, 00:30:55.187 "enable_zerocopy_send_server": true, 00:30:55.187 "enable_zerocopy_send_client": false, 00:30:55.187 "zerocopy_threshold": 0, 00:30:55.187 "tls_version": 0, 00:30:55.187 "enable_ktls": false 00:30:55.187 } 00:30:55.187 } 00:30:55.187 ] 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "subsystem": "vmd", 00:30:55.187 "config": [] 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "subsystem": "accel", 00:30:55.187 "config": [ 00:30:55.187 { 00:30:55.187 "method": "accel_set_options", 00:30:55.187 "params": { 00:30:55.187 "small_cache_size": 128, 00:30:55.187 "large_cache_size": 16, 00:30:55.187 "task_count": 2048, 00:30:55.187 "sequence_count": 2048, 00:30:55.187 "buf_count": 2048 00:30:55.187 } 00:30:55.187 } 00:30:55.187 ] 00:30:55.187 
}, 00:30:55.187 { 00:30:55.187 "subsystem": "bdev", 00:30:55.187 "config": [ 00:30:55.187 { 00:30:55.187 "method": "bdev_set_options", 00:30:55.187 "params": { 00:30:55.187 "bdev_io_pool_size": 65535, 00:30:55.187 "bdev_io_cache_size": 256, 00:30:55.187 "bdev_auto_examine": true, 00:30:55.187 "iobuf_small_cache_size": 128, 00:30:55.187 "iobuf_large_cache_size": 16 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "bdev_raid_set_options", 00:30:55.187 "params": { 00:30:55.187 "process_window_size_kb": 1024 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "bdev_iscsi_set_options", 00:30:55.187 "params": { 00:30:55.187 "timeout_sec": 30 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "bdev_nvme_set_options", 00:30:55.187 "params": { 00:30:55.187 "action_on_timeout": "none", 00:30:55.187 "timeout_us": 0, 00:30:55.187 "timeout_admin_us": 0, 00:30:55.187 "keep_alive_timeout_ms": 10000, 00:30:55.187 "arbitration_burst": 0, 00:30:55.187 "low_priority_weight": 0, 00:30:55.187 "medium_priority_weight": 0, 00:30:55.187 "high_priority_weight": 0, 00:30:55.187 "nvme_adminq_poll_period_us": 10000, 00:30:55.187 "nvme_ioq_poll_period_us": 0, 00:30:55.187 "io_queue_requests": 512, 00:30:55.187 "delay_cmd_submit": true, 00:30:55.187 "transport_retry_count": 4, 00:30:55.187 "bdev_retry_count": 3, 00:30:55.187 "transport_ack_timeout": 0, 00:30:55.187 "ctrlr_loss_timeout_sec": 0, 00:30:55.187 "reconnect_delay_sec": 0, 00:30:55.187 "fast_io_fail_timeout_sec": 0, 00:30:55.187 "disable_auto_failback": false, 00:30:55.187 "generate_uuids": false, 00:30:55.187 "transport_tos": 0, 00:30:55.187 "nvme_error_stat": false, 00:30:55.187 "rdma_srq_size": 0, 00:30:55.187 "io_path_stat": false, 00:30:55.187 "allow_accel_sequence": false, 00:30:55.187 "rdma_max_cq_size": 0, 00:30:55.187 "rdma_cm_event_timeout_ms": 0, 00:30:55.187 "dhchap_digests": [ 00:30:55.187 "sha256", 00:30:55.187 "sha384", 00:30:55.187 "sha512" 00:30:55.187 ], 00:30:55.187 "dhchap_dhgroups": [ 00:30:55.187 "null", 00:30:55.187 "ffdhe2048", 00:30:55.187 "ffdhe3072", 00:30:55.187 "ffdhe4096", 00:30:55.187 "ffdhe6144", 00:30:55.187 "ffdhe8192" 00:30:55.187 ] 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "bdev_nvme_attach_controller", 00:30:55.187 "params": { 00:30:55.187 "name": "nvme0", 00:30:55.187 "trtype": "TCP", 00:30:55.187 "adrfam": "IPv4", 00:30:55.187 "traddr": "127.0.0.1", 00:30:55.187 "trsvcid": "4420", 00:30:55.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.187 "prchk_reftag": false, 00:30:55.187 "prchk_guard": false, 00:30:55.187 "ctrlr_loss_timeout_sec": 0, 00:30:55.187 "reconnect_delay_sec": 0, 00:30:55.187 "fast_io_fail_timeout_sec": 0, 00:30:55.187 "psk": "key0", 00:30:55.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.187 "hdgst": false, 00:30:55.187 "ddgst": false 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "bdev_nvme_set_hotplug", 00:30:55.187 "params": { 00:30:55.187 "period_us": 100000, 00:30:55.187 "enable": false 00:30:55.187 } 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "method": "bdev_wait_for_examine" 00:30:55.187 } 00:30:55.187 ] 00:30:55.187 }, 00:30:55.187 { 00:30:55.187 "subsystem": "nbd", 00:30:55.187 "config": [] 00:30:55.187 } 00:30:55.187 ] 00:30:55.187 }' 00:30:55.187 13:05:25 keyring_file -- keyring/file.sh@114 -- # killprocess 1917033 00:30:55.187 13:05:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1917033 ']' 00:30:55.187 13:05:25 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1917033 00:30:55.187 13:05:25 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:55.187 13:05:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:55.187 13:05:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1917033 00:30:55.187 13:05:26 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:55.187 13:05:26 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:55.187 13:05:26 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1917033' 00:30:55.187 killing process with pid 1917033 00:30:55.187 13:05:26 keyring_file -- common/autotest_common.sh@967 -- # kill 1917033 00:30:55.187 Received shutdown signal, test time was about 1.000000 seconds 00:30:55.187 00:30:55.188 Latency(us) 00:30:55.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.188 =================================================================================================================== 00:30:55.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.188 13:05:26 keyring_file -- common/autotest_common.sh@972 -- # wait 1917033 00:30:55.449 13:05:26 keyring_file -- keyring/file.sh@117 -- # bperfpid=1918640 00:30:55.449 13:05:26 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1918640 /var/tmp/bperf.sock 00:30:55.449 13:05:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1918640 ']' 00:30:55.449 13:05:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:55.449 13:05:26 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:55.449 13:05:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.449 13:05:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:55.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
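The second bdevperf instance started here is not configured over RPC at all: the suite captured the first instance's live configuration with save_config (the JSON dumped above) and hands it to the new process as -c /dev/fd/63; the JSON echoed in the next record is what arrives on that descriptor. In outline, assuming the paths from the log:

    # Round-trip the JSON config: keys and the TLS-attached controller are
    # recreated at startup from the saved config instead of replayed RPCs.
    config=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock save_config)
    "$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &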
00:30:55.449 13:05:26 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:55.449 "subsystems": [ 00:30:55.449 { 00:30:55.449 "subsystem": "keyring", 00:30:55.449 "config": [ 00:30:55.449 { 00:30:55.449 "method": "keyring_file_add_key", 00:30:55.449 "params": { 00:30:55.449 "name": "key0", 00:30:55.449 "path": "/tmp/tmp.LVesLLjjyP" 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "keyring_file_add_key", 00:30:55.449 "params": { 00:30:55.449 "name": "key1", 00:30:55.449 "path": "/tmp/tmp.3UrqHKgd0z" 00:30:55.449 } 00:30:55.449 } 00:30:55.449 ] 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "subsystem": "iobuf", 00:30:55.449 "config": [ 00:30:55.449 { 00:30:55.449 "method": "iobuf_set_options", 00:30:55.449 "params": { 00:30:55.449 "small_pool_count": 8192, 00:30:55.449 "large_pool_count": 1024, 00:30:55.449 "small_bufsize": 8192, 00:30:55.449 "large_bufsize": 135168 00:30:55.449 } 00:30:55.449 } 00:30:55.449 ] 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "subsystem": "sock", 00:30:55.449 "config": [ 00:30:55.449 { 00:30:55.449 "method": "sock_set_default_impl", 00:30:55.449 "params": { 00:30:55.449 "impl_name": "posix" 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "sock_impl_set_options", 00:30:55.449 "params": { 00:30:55.449 "impl_name": "ssl", 00:30:55.449 "recv_buf_size": 4096, 00:30:55.449 "send_buf_size": 4096, 00:30:55.449 "enable_recv_pipe": true, 00:30:55.449 "enable_quickack": false, 00:30:55.449 "enable_placement_id": 0, 00:30:55.449 "enable_zerocopy_send_server": true, 00:30:55.449 "enable_zerocopy_send_client": false, 00:30:55.449 "zerocopy_threshold": 0, 00:30:55.449 "tls_version": 0, 00:30:55.449 "enable_ktls": false 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "sock_impl_set_options", 00:30:55.449 "params": { 00:30:55.449 "impl_name": "posix", 00:30:55.449 "recv_buf_size": 2097152, 00:30:55.449 "send_buf_size": 2097152, 00:30:55.449 "enable_recv_pipe": true, 00:30:55.449 "enable_quickack": false, 00:30:55.449 "enable_placement_id": 0, 00:30:55.449 "enable_zerocopy_send_server": true, 00:30:55.449 "enable_zerocopy_send_client": false, 00:30:55.449 "zerocopy_threshold": 0, 00:30:55.449 "tls_version": 0, 00:30:55.449 "enable_ktls": false 00:30:55.449 } 00:30:55.449 } 00:30:55.449 ] 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "subsystem": "vmd", 00:30:55.449 "config": [] 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "subsystem": "accel", 00:30:55.449 "config": [ 00:30:55.449 { 00:30:55.449 "method": "accel_set_options", 00:30:55.449 "params": { 00:30:55.449 "small_cache_size": 128, 00:30:55.449 "large_cache_size": 16, 00:30:55.449 "task_count": 2048, 00:30:55.449 "sequence_count": 2048, 00:30:55.449 "buf_count": 2048 00:30:55.449 } 00:30:55.449 } 00:30:55.449 ] 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "subsystem": "bdev", 00:30:55.449 "config": [ 00:30:55.449 { 00:30:55.449 "method": "bdev_set_options", 00:30:55.449 "params": { 00:30:55.449 "bdev_io_pool_size": 65535, 00:30:55.449 "bdev_io_cache_size": 256, 00:30:55.449 "bdev_auto_examine": true, 00:30:55.449 "iobuf_small_cache_size": 128, 00:30:55.449 "iobuf_large_cache_size": 16 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "bdev_raid_set_options", 00:30:55.449 "params": { 00:30:55.449 "process_window_size_kb": 1024 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "bdev_iscsi_set_options", 00:30:55.449 "params": { 00:30:55.449 "timeout_sec": 30 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": 
"bdev_nvme_set_options", 00:30:55.449 "params": { 00:30:55.449 "action_on_timeout": "none", 00:30:55.449 "timeout_us": 0, 00:30:55.449 "timeout_admin_us": 0, 00:30:55.449 "keep_alive_timeout_ms": 10000, 00:30:55.449 "arbitration_burst": 0, 00:30:55.449 "low_priority_weight": 0, 00:30:55.449 "medium_priority_weight": 0, 00:30:55.449 "high_priority_weight": 0, 00:30:55.449 "nvme_adminq_poll_period_us": 10000, 00:30:55.449 "nvme_ioq_poll_period_us": 0, 00:30:55.449 "io_queue_requests": 512, 00:30:55.449 "delay_cmd_submit": true, 00:30:55.449 "transport_retry_count": 4, 00:30:55.449 "bdev_retry_count": 3, 00:30:55.449 "transport_ack_timeout": 0, 00:30:55.449 "ctrlr_loss_timeout_sec": 0, 00:30:55.449 "reconnect_delay_sec": 0, 00:30:55.449 "fast_io_fail_timeout_sec": 0, 00:30:55.449 "disable_auto_failback": false, 00:30:55.449 "generate_uuids": false, 00:30:55.449 "transport_tos": 0, 00:30:55.449 "nvme_error_stat": false, 00:30:55.449 "rdma_srq_size": 0, 00:30:55.449 "io_path_stat": false, 00:30:55.449 "allow_accel_sequence": false, 00:30:55.449 "rdma_max_cq_size": 0, 00:30:55.449 "rdma_cm_event_timeout_ms": 0, 00:30:55.449 "dhchap_digests": [ 00:30:55.449 "sha256", 00:30:55.449 "sha384", 00:30:55.449 "sha512" 00:30:55.449 ], 00:30:55.449 "dhchap_dhgroups": [ 00:30:55.449 "null", 00:30:55.449 "ffdhe2048", 00:30:55.449 "ffdhe3072", 00:30:55.449 "ffdhe4096", 00:30:55.449 "ffdhe6144", 00:30:55.449 "ffdhe8192" 00:30:55.449 ] 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "bdev_nvme_attach_controller", 00:30:55.449 "params": { 00:30:55.449 "name": "nvme0", 00:30:55.449 "trtype": "TCP", 00:30:55.449 "adrfam": "IPv4", 00:30:55.449 "traddr": "127.0.0.1", 00:30:55.449 "trsvcid": "4420", 00:30:55.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.449 "prchk_reftag": false, 00:30:55.449 "prchk_guard": false, 00:30:55.449 "ctrlr_loss_timeout_sec": 0, 00:30:55.449 "reconnect_delay_sec": 0, 00:30:55.449 "fast_io_fail_timeout_sec": 0, 00:30:55.449 "psk": "key0", 00:30:55.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.449 "hdgst": false, 00:30:55.449 "ddgst": false 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "bdev_nvme_set_hotplug", 00:30:55.449 "params": { 00:30:55.449 "period_us": 100000, 00:30:55.449 "enable": false 00:30:55.449 } 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "method": "bdev_wait_for_examine" 00:30:55.449 } 00:30:55.449 ] 00:30:55.449 }, 00:30:55.449 { 00:30:55.449 "subsystem": "nbd", 00:30:55.449 "config": [] 00:30:55.449 } 00:30:55.449 ] 00:30:55.449 }' 00:30:55.449 13:05:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.450 13:05:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.450 [2024-07-15 13:05:26.242075] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:30:55.450 [2024-07-15 13:05:26.242123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918640 ]
00:30:55.450 EAL: No free 2048 kB hugepages reported on node 1
00:30:55.450 [2024-07-15 13:05:26.306279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.450 [2024-07-15 13:05:26.376020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:55.709 [2024-07-15 13:05:26.535636] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:30:56.276 13:05:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:56.276 13:05:27 keyring_file -- common/autotest_common.sh@862 -- # return 0
00:30:56.276 13:05:27 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys
00:30:56.276 13:05:27 keyring_file -- keyring/file.sh@120 -- # jq length
00:30:56.276 13:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:30:56.535 13:05:27 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 ))
00:30:56.535 13:05:27 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:30:56.535 13:05:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:30:56.535 13:05:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:30:56.535 13:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:30:56.794 13:05:27 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 ))
00:30:56.794 13:05:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers
00:30:56.794 13:05:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name'
00:30:56.794 13:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:30:57.054 13:05:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]]
00:30:57.054 13:05:27 keyring_file -- keyring/file.sh@1 -- # cleanup
00:30:57.054 13:05:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LVesLLjjyP /tmp/tmp.3UrqHKgd0z
00:30:57.054 13:05:27 keyring_file -- keyring/file.sh@20 -- # killprocess 1918640
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1918640 ']'
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1918640
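The get_refcnt and get_key helpers exercised above reduce to a single keyring_get_keys RPC filtered with jq; a minimal standalone equivalent of the key0 probe, using the same socket path as this run:

# keyring_get_keys returns a JSON array of key objects (name, refcnt, ...).
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'

file.sh expects 2 for key0 but 1 for key1 here, presumably because key0 is held both by the keyring and by the attached nvme0 controller while key1 sits unreferenced.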
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@953 -- # uname
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1918640
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1918640'
killing process with pid 1918640
13:05:27 keyring_file -- common/autotest_common.sh@967 -- # kill 1918640
Received shutdown signal, test time was about 1.000000 seconds
00:30:57.054
00:30:57.054 Latency(us)
00:30:57.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:57.054 ===================================================================================================================
00:30:57.054 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@972 -- # wait 1918640
00:30:57.054 13:05:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1916904
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1916904 ']'
00:30:57.054 13:05:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1916904
00:30:57.054 13:05:28 keyring_file -- common/autotest_common.sh@953 -- # uname
00:30:57.054 13:05:28 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:57.313 13:05:28 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916904
00:30:57.313 13:05:28 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:57.313 13:05:28 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:57.313 13:05:28 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916904'
killing process with pid 1916904
13:05:28 keyring_file -- common/autotest_common.sh@967 -- # kill 1916904
00:30:57.313 [2024-07-15 13:05:28.046890] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:30:57.313 13:05:28 keyring_file -- common/autotest_common.sh@972 -- # wait 1916904
00:30:57.577
00:30:57.577 real 0m12.131s
00:30:57.577 user 0m28.865s
00:30:57.577 sys 0m2.847s
00:30:57.577 13:05:28 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:57.577 13:05:28 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:30:57.577 ************************************
00:30:57.577 END TEST keyring_file
00:30:57.577 ************************************
00:30:57.577 13:05:28 -- common/autotest_common.sh@1142 -- # return 0
00:30:57.577 13:05:28 -- spdk/autotest.sh@296 -- # [[ y == y ]]
00:30:57.577 13:05:28 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:30:57.577 13:05:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:30:57.577 13:05:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:57.577 13:05:28 -- common/autotest_common.sh@10 -- # set +x
00:30:57.577 ************************************
00:30:57.577 START TEST keyring_linux
00:30:57.577 ************************************
00:30:57.577 13:05:28 keyring_linux -- common/autotest_common.sh@1123 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:57.577 * Looking for test storage... 00:30:57.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:57.577 13:05:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:57.577 13:05:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.577 13:05:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.837 13:05:28 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.837 13:05:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.837 13:05:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.837 13:05:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.837 13:05:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.837 13:05:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.838 13:05:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.838 13:05:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:57.838 13:05:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:57.838 13:05:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:57.838 /tmp/:spdk-test:key0 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:57.838 13:05:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:57.838 13:05:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:57.838 /tmp/:spdk-test:key1 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1918983 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1918983 00:30:57.838 13:05:28 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:57.838 13:05:28 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1918983 ']' 00:30:57.838 13:05:28 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.838 13:05:28 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:57.838 13:05:28 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.838 13:05:28 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:57.838 13:05:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:57.838 [2024-07-15 13:05:28.680449] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
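prep_key above writes /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 in the NVMe TLS PSK interchange format, then locks them down with chmod 0600. The body of the inline "python -" step is not echoed by xtrace; the sketch below shows what it presumably computes, assuming format_key appends a little-endian CRC32 of the key and base64-encodes the pair, which is consistent with the NVMeTLSkey-1:00:...JEiQ: strings seen later in this log.

# Assumption: this mirrors format_key in the harness's nvmf/common.sh; verify there.
python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"              # key0 from linux.sh@13
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte integrity tail
digest = 0                                             # interchange hash field: 0 = none, per digest=0 above
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF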
00:30:57.838 [2024-07-15 13:05:28.680500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918983 ] 00:30:57.838 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.838 [2024-07-15 13:05:28.747922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.097 [2024-07-15 13:05:28.828537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:58.665 13:05:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:58.665 [2024-07-15 13:05:29.502017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.665 null0 00:30:58.665 [2024-07-15 13:05:29.534067] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:58.665 [2024-07-15 13:05:29.534380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.665 13:05:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:58.665 139146916 00:30:58.665 13:05:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:58.665 482770765 00:30:58.665 13:05:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1919217 00:30:58.665 13:05:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1919217 /var/tmp/bperf.sock 00:30:58.665 13:05:29 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1919217 ']' 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:58.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:58.665 13:05:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:58.665 [2024-07-15 13:05:29.603099] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
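linux.sh@66 and @67 above parked the two interchange keys in the session keyring; the serials printed back (139146916 and 482770765) are what the later get_keysn lookups must reproduce. The same lifecycle, condensed into one sketch (serial numbers differ on every run):

# Add, look up, inspect, and drop a key in the session keyring (@s).
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
keyctl print "$sn"                      # dumps the stored PSK payload
keyctl unlink "$sn"                     # reports "1 links removed" on success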
00:30:58.665 [2024-07-15 13:05:29.603140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919217 ] 00:30:58.924 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.924 [2024-07-15 13:05:29.670748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.924 [2024-07-15 13:05:29.749846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.492 13:05:30 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.492 13:05:30 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:59.492 13:05:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:59.492 13:05:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:59.751 13:05:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:59.751 13:05:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:00.009 13:05:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:00.009 13:05:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:00.268 [2024-07-15 13:05:30.974351] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:00.268 nvme0n1 00:31:00.268 13:05:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:00.268 13:05:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:00.268 13:05:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:00.268 13:05:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:00.268 13:05:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:00.268 13:05:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:00.527 13:05:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.527 13:05:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:00.527 13:05:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@25 -- # sn=139146916 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0
00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@26 -- # [[ 139146916 == \1\3\9\1\4\6\9\1\6 ]]
00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 139146916
00:31:00.527 13:05:31 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:31:00.528 13:05:31 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:00.787 Running I/O for 1 seconds...
00:31:01.726
00:31:01.726 Latency(us)
00:31:01.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:01.726 nvme0n1 : 1.01 17891.91 69.89 0.00 0.00 7126.89 5898.24 15272.74
00:31:01.726 ===================================================================================================================
00:31:01.726 Total : 17891.91 69.89 0.00 0.00 7126.89 5898.24 15272.74
00:31:01.726 0
00:31:01.726 13:05:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:31:01.726 13:05:32 keyring_linux --
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:02.245 [2024-07-15 13:05:33.085740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:31:02.245 [2024-07-15 13:05:33.086055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c1fd0 (107): Transport endpoint is not connected
00:31:02.245 [2024-07-15 13:05:33.087050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c1fd0 (9): Bad file descriptor
00:31:02.245 [2024-07-15 13:05:33.088055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:02.245 [2024-07-15 13:05:33.088066] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:31:02.245 [2024-07-15 13:05:33.088073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:02.245 request:
00:31:02.245 {
00:31:02.245   "name": "nvme0",
00:31:02.245   "trtype": "tcp",
00:31:02.245   "traddr": "127.0.0.1",
00:31:02.245   "adrfam": "ipv4",
00:31:02.245   "trsvcid": "4420",
00:31:02.245   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:02.245   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:02.245   "prchk_reftag": false,
00:31:02.245   "prchk_guard": false,
00:31:02.245   "hdgst": false,
00:31:02.245   "ddgst": false,
00:31:02.245   "psk": ":spdk-test:key1",
00:31:02.245   "method": "bdev_nvme_attach_controller",
00:31:02.245   "req_id": 1
00:31:02.245 }
00:31:02.245 Got JSON-RPC error response
00:31:02.245 response:
00:31:02.245 {
00:31:02.245   "code": -5,
00:31:02.245   "message": "Input/output error"
00:31:02.245 }
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@33 -- # sn=139146916
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 139146916
00:31:02.245 1 links removed
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@33 -- # sn=482770765
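The NOT wrapper above asserts that this attach must fail: the target listener was configured with key0's PSK, so dialing it with :spdk-test:key1 yields the errno 107 noise and the -5 Input/output error response just shown. Stripped of the harness plumbing, the negative probe is simply:

# Expected to fail: wrong PSK for this listener, so success would be the bug.
if ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1; then
    echo "wrong PSK unexpectedly authenticated" >&2
    exit 1
fi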
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 482770765
00:31:02.245 1 links removed
00:31:02.245 13:05:33 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1919217
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1919217 ']'
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1919217
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1919217
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1919217'
killing process with pid 1919217
13:05:33 keyring_linux -- common/autotest_common.sh@967 -- # kill 1919217
Received shutdown signal, test time was about 1.000000 seconds
00:31:02.245
00:31:02.245 Latency(us)
00:31:02.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.245 ===================================================================================================================
00:31:02.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:02.245 13:05:33 keyring_linux -- common/autotest_common.sh@972 -- # wait 1919217
00:31:02.503 13:05:33 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1918983
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1918983 ']'
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1918983
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1918983
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1918983'
killing process with pid 1918983
13:05:33 keyring_linux -- common/autotest_common.sh@967 -- # kill 1918983
00:31:02.503 13:05:33 keyring_linux -- common/autotest_common.sh@972 -- # wait 1918983
00:31:02.762
00:31:02.762 real 0m5.281s
00:31:02.762 user 0m9.357s
00:31:02.762 sys 0m1.567s
00:31:02.762 13:05:33 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:31:02.762 13:05:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:02.762 ************************************
00:31:02.762 END TEST keyring_linux
00:31:02.762 ************************************
00:31:03.021 13:05:33 -- common/autotest_common.sh@1142 -- # return 0
00:31:03.021 13:05:33 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:31:03.021 13:05:33 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:31:03.021 13:05:33 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:31:03.021 13:05:33 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:31:03.021 13:05:33 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:31:03.021 13:05:33 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:31:03.021 13:05:33 --
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:03.021 13:05:33 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:03.021 13:05:33 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:03.021 13:05:33 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:03.021 13:05:33 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:03.021 13:05:33 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:03.021 13:05:33 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:03.021 13:05:33 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:03.022 13:05:33 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:03.022 13:05:33 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:03.022 13:05:33 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:03.022 13:05:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:03.022 13:05:33 -- common/autotest_common.sh@10 -- # set +x 00:31:03.022 13:05:33 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:03.022 13:05:33 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:03.022 13:05:33 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:03.022 13:05:33 -- common/autotest_common.sh@10 -- # set +x 00:31:08.297 INFO: APP EXITING 00:31:08.297 INFO: killing all VMs 00:31:08.297 INFO: killing vhost app 00:31:08.297 INFO: EXIT DONE 00:31:10.833 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:10.833 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:10.833 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:13.371 Cleaning 00:31:13.371 Removing: /var/run/dpdk/spdk0/config 00:31:13.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:13.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:13.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:13.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:13.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:13.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:13.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:13.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:13.630 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:13.630 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:13.630 Removing: /var/run/dpdk/spdk1/config 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:13.630 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:13.630 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:13.630 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:13.630 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:13.630 Removing: /var/run/dpdk/spdk2/config 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:13.630 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:13.630 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:13.630 Removing: /var/run/dpdk/spdk3/config 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:13.630 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:13.630 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:13.630 Removing: /var/run/dpdk/spdk4/config 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:13.630 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:13.630 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:13.630 Removing: /dev/shm/bdev_svc_trace.1 00:31:13.630 Removing: /dev/shm/nvmf_trace.0 00:31:13.630 Removing: /dev/shm/spdk_tgt_trace.pid1529709 00:31:13.630 Removing: /var/run/dpdk/spdk0 00:31:13.630 Removing: /var/run/dpdk/spdk1 00:31:13.630 Removing: /var/run/dpdk/spdk2 00:31:13.630 Removing: /var/run/dpdk/spdk3 00:31:13.630 Removing: /var/run/dpdk/spdk4 00:31:13.630 Removing: /var/run/dpdk/spdk_pid1527453 00:31:13.630 Removing: /var/run/dpdk/spdk_pid1528513 00:31:13.630 Removing: /var/run/dpdk/spdk_pid1529709 00:31:13.630 Removing: /var/run/dpdk/spdk_pid1530371 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1531682 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1531920 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1532895 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1533126 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1533351 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1534941 00:31:13.889 Removing: 
/var/run/dpdk/spdk_pid1536014 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1536299 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1536589 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1537004 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1537382 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1537591 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1537800 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1538090 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1538913 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1541911 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1542171 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1542433 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1542640 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1542987 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1543170 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1543660 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1543704 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1544059 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1544166 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1544424 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1544610 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1544994 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1545249 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1545536 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1545800 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1545938 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1546106 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1546361 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1546607 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1546860 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1547105 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1547361 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1547613 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1547874 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1548128 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1548375 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1548620 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1548875 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1549121 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1549378 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1549626 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1549874 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1550127 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1550387 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1550669 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1550932 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1551218 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1551415 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1551725 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1555524 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1600170 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1604421 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1614420 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1619936 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1624320 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1625013 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1631028 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1637269 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1637277 00:31:13.889 Removing: /var/run/dpdk/spdk_pid1638187 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1639046 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1639802 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1640492 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1640507 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1640798 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1640952 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1640955 00:31:14.150 Removing: 
/var/run/dpdk/spdk_pid1641876 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1642788 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1643584 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1644185 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1644337 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1644628 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1645877 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1646846 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1655181 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1655434 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1659689 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1666063 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1668681 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1679137 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1688176 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1689893 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1690824 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1707544 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1711652 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1736928 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1741422 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1743028 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1744923 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1745101 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1745333 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1745579 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1746302 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1748650 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1749644 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1750148 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1752251 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1752968 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1753696 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1757769 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1767901 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1771739 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1777939 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1779237 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1780781 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1785082 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1789164 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1797211 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1797215 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1801921 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1802148 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1802275 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1802618 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1802629 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1807101 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1807668 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1812015 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1814762 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1820370 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1825938 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1834644 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1841943 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1841992 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1860517 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1861214 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1861907 00:31:14.150 Removing: /var/run/dpdk/spdk_pid1862476 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1863360 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1864056 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1864726 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1865237 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1869508 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1869881 00:31:14.447 Removing: 
/var/run/dpdk/spdk_pid1875784 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1876059 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1878285 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1886533 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1886643 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1891792 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1893759 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1895722 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1896767 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1898851 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1900023 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1908756 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1909225 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1909892 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1912160 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1912625 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1913091 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1916904 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1917033 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1918640 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1918983 00:31:14.447 Removing: /var/run/dpdk/spdk_pid1919217 00:31:14.447 Clean 00:31:14.447 13:05:45 -- common/autotest_common.sh@1451 -- # return 0 00:31:14.447 13:05:45 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:14.447 13:05:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.447 13:05:45 -- common/autotest_common.sh@10 -- # set +x 00:31:14.447 13:05:45 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:14.447 13:05:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.447 13:05:45 -- common/autotest_common.sh@10 -- # set +x 00:31:14.447 13:05:45 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:14.447 13:05:45 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:14.447 13:05:45 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:14.447 13:05:45 -- spdk/autotest.sh@391 -- # hash lcov 00:31:14.447 13:05:45 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:14.447 13:05:45 -- spdk/autotest.sh@393 -- # hostname 00:31:14.447 13:05:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:14.705 geninfo: WARNING: invalid characters removed from testname! 
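The coverage steps that follow are a capture-merge-filter pipeline: autotest.sh@393 captured the per-test counters into cov_test.info, @394 folds them into the baseline, and @395 onward strips third-party and tooling paths. A condensed sketch with this job's paths (the long --rc lcov_branch_coverage=1 ... option runs from the log are left out here for brevity):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
lcov -q -c -d . -t spdk-wfp-08 -o ../output/cov_test.info       # capture (@393)
lcov -q -a ../output/cov_base.info -a ../output/cov_test.info \
    -o ../output/cov_total.info                                 # merge (@394)
lcov -q -r ../output/cov_total.info '*/dpdk/*' '/usr/*' \
    -o ../output/cov_total.info                                 # filter (@395, @396)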
00:31:36.633 13:06:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:37.199 13:06:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:39.102 13:06:09 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:41.007 13:06:11 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:42.914 13:06:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:44.290 13:06:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:46.197 13:06:17 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:46.197 13:06:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.197 13:06:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:46.197 13:06:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.197 13:06:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.197 13:06:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.197 13:06:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:46.197 13:06:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:46.197 13:06:17 -- paths/export.sh@5 -- $ export PATH
00:31:46.197 13:06:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:46.197 13:06:17 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:46.197 13:06:17 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:46.197 13:06:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721041577.XXXXXX
00:31:46.197 13:06:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721041577.wcr8VC
00:31:46.197 13:06:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:46.197 13:06:17 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:46.197 13:06:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:46.197 13:06:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:46.197 13:06:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:46.197 13:06:17 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:46.197 13:06:17 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:46.197 13:06:17 -- common/autotest_common.sh@10 -- $ set +x
00:31:46.197 13:06:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:46.197 13:06:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:46.197 13:06:17 -- pm/common@17 -- $ local monitor
00:31:46.197 13:06:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:46.197 13:06:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:46.197 13:06:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:46.197 13:06:17 -- pm/common@21 -- $ date +%s
00:31:46.197 13:06:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:46.197 13:06:17 -- pm/common@21 -- $ date +%s
00:31:46.197 13:06:17 -- pm/common@25 -- $ sleep 1
00:31:46.197 13:06:17 -- pm/common@21 -- $ date +%s
00:31:46.197 13:06:17 -- pm/common@21 -- $ date +%s
00:31:46.197 13:06:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041577
00:31:46.197 13:06:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041577
00:31:46.197 13:06:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041577
00:31:46.197 13:06:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721041577
00:31:46.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041577_collect-vmstat.pm.log
00:31:46.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041577_collect-cpu-load.pm.log
00:31:46.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041577_collect-cpu-temp.pm.log
00:31:46.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721041577_collect-bmc-pm.bmc.pm.log
00:31:47.396 13:06:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:47.396 13:06:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:47.396 13:06:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:47.396 13:06:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:47.396 13:06:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:47.396 13:06:18 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:47.396 13:06:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:47.396 13:06:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:47.396 13:06:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:47.396 13:06:18 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:47.396 13:06:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:47.396 13:06:18 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:47.396 13:06:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:47.396 13:06:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:47.396 13:06:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:47.396 13:06:18 -- pm/common@44 -- $ pid=1930033
00:31:47.396 13:06:18 -- pm/common@50 -- $ kill -TERM 1930033
00:31:47.396 13:06:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:47.396 13:06:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:47.396 13:06:18 -- pm/common@44 -- $ pid=1930035
00:31:47.396 13:06:18 -- pm/common@50 -- $ kill -TERM 1930035
00:31:47.396 13:06:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:47.396 13:06:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:47.396 13:06:18 -- pm/common@44 -- $ pid=1930036
00:31:47.396 13:06:18 -- pm/common@50 -- $ kill -TERM 1930036
00:31:47.396 13:06:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:47.396 13:06:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:47.396 13:06:18 -- pm/common@44 -- $ pid=1930059
00:31:47.396 13:06:18 -- pm/common@50 -- $ sudo -E kill -TERM 1930059
00:31:47.396 + [[ -n 1422707 ]]
00:31:47.396 + sudo kill 1422707
00:31:47.406 [Pipeline] }
00:31:47.427 [Pipeline] // stage
00:31:47.433 [Pipeline] }
00:31:47.451 [Pipeline] // timeout
00:31:47.458 [Pipeline] }
00:31:47.479 [Pipeline] // catchError
00:31:47.485 [Pipeline] }
00:31:47.504 [Pipeline] // wrap
00:31:47.512 [Pipeline] }
00:31:47.555 [Pipeline] // catchError
00:31:47.564 [Pipeline] stage
00:31:47.566 [Pipeline] { (Epilogue)
00:31:47.581 [Pipeline] catchError
00:31:47.582 [Pipeline] {
00:31:47.597 [Pipeline] echo
00:31:47.599 Cleanup processes
00:31:47.604 [Pipeline] sh
00:31:47.886 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:47.886 1930170 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:47.886 1930434 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:47.903 [Pipeline] sh
00:31:48.248 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:48.248 ++ grep -v 'sudo pgrep'
00:31:48.248 ++ awk '{print $1}'
00:31:48.248 + sudo kill -9 1930170
00:31:48.261 [Pipeline] sh
00:31:48.546 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:58.535 [Pipeline] sh
00:31:58.817 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:58.818 Artifacts sizes are good
00:31:58.833 [Pipeline] archiveArtifacts
00:31:58.840 Archiving artifacts
00:31:59.020 [Pipeline] sh
00:31:59.303 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:59.320 [Pipeline] cleanWs
00:31:59.331 [WS-CLEANUP] Deleting project workspace...
00:31:59.331 [WS-CLEANUP] Deferred wipeout is used...
00:31:59.337 [WS-CLEANUP] done
00:31:59.340 [Pipeline] }
00:31:59.361 [Pipeline] // catchError
00:31:59.373 [Pipeline] sh
00:31:59.657 + logger -p user.info -t JENKINS-CI
00:31:59.665 [Pipeline] }
00:31:59.680 [Pipeline] // stage
00:31:59.684 [Pipeline] }
00:31:59.700 [Pipeline] // node
00:31:59.705 [Pipeline] End of Pipeline
00:31:59.749 Finished: SUCCESS
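
Note on the pm/common trace above: start_monitor_resources launches collect-cpu-load, collect-vmstat, collect-cpu-temp, and collect-bmc-pm, each recording its pid in a <monitor>.pid file under the output/power directory, and stop_monitor_resources (installed as an EXIT trap) walks the same list, checks for each pid file, and sends SIGTERM. The following is a minimal sketch of that pid-file pattern reconstructed from the trace, not SPDK's actual scripts/perf/pm code; the stop_monitors helper name and the POWER_DIR variable are illustrative assumptions (only MONITOR_RESOURCES appears verbatim in the trace).

    #!/usr/bin/env bash
    # Sketch of the pid-file teardown pattern visible in the pm/common trace.
    # POWER_DIR is an assumed name for the output/power directory seen in the log.
    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

    stop_monitors() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # Skip monitors that never wrote a pid file (e.g. failed to start).
            [[ -e $POWER_DIR/$monitor.pid ]] || continue
            pid=$(<"$POWER_DIR/$monitor.pid")
            # In the trace only collect-bmc-pm is launched and signalled via
            # sudo -E, since it needs ipmitool access; mirror that here.
            if [[ $monitor == collect-bmc-pm ]]; then
                sudo -E kill -TERM "$pid"
            else
                kill -TERM "$pid"
            fi
        done
    }

    stop_monitors

Signalling with SIGTERM rather than SIGKILL lets each collector flush its .pm.log before exiting; the later cleanup stage's pgrep/kill -9 pass then reaps anything (like the leftover ipmitool sdr dump) that ignored the polite request.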